Sample records for parallelizable multiloop numerical

  1. Multiloop Functional Renormalization Group That Sums Up All Parquet Diagrams

    NASA Astrophysics Data System (ADS)

    Kugler, Fabian B.; von Delft, Jan

    2018-02-01

    We present a multiloop flow equation for the four-point vertex in the functional renormalization group (FRG) framework. The multiloop flow consists of successive one-loop calculations and sums up all parquet diagrams to arbitrary order. This provides substantial improvement of FRG computations for the four-point vertex and, consequently, the self-energy. Using the x-ray-edge singularity as an example, we show that solving the multiloop FRG flow is equivalent to solving the (first-order) parquet equations and illustrate this with numerical results.

  2. Synthesis of multi-loop automatic control systems by the nonlinear programming method

    NASA Astrophysics Data System (ADS)

    Voronin, A. V.; Emelyanova, T. A.

    2017-01-01

    The article deals with the problem of calculating the optimal tuning parameters of multi-loop control systems by numerical and nonlinear programming methods. For this purpose, the Optimization Toolbox of MATLAB is used.

  3. Resolution of singularities for multi-loop integrals

    NASA Astrophysics Data System (ADS)

    Bogner, Christian; Weinzierl, Stefan

    2008-04-01

    We report on a program for the numerical evaluation of divergent multi-loop integrals. The program is based on iterated sector decomposition. We improve the original algorithm of Binoth and Heinrich such that the program is guaranteed to terminate. The program can be used to compute numerically the Laurent expansion of divergent multi-loop integrals regulated by dimensional regularisation. The symbolic and the numerical steps of the algorithm are combined into one program.
    Program summary:
    Program title: sector_decomposition
    Catalogue identifier: AEAG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 47 506
    No. of bytes in distributed program, including test data, etc.: 328 485
    Distribution format: tar.gz
    Programming language: C++
    Computer: all
    Operating system: Unix
    RAM: Depending on the complexity of the problem
    Classification: 4.4
    External routines: GiNaC, available from http://www.ginac.de; GNU Scientific Library, available from http://www.gnu.org/software/gsl
    Nature of problem: Computation of divergent multi-loop integrals.
    Solution method: Sector decomposition.
    Restrictions: Only limited by the available memory and CPU time.
    Running time: Depending on the complexity of the problem.

  4. A systematic and efficient method to compute multi-loop master integrals

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
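
    A minimal sketch of the general idea, in Python with SciPy (not the authors' code, and with a toy coefficient matrix rather than one derived from a physical diagram): a vector of master integrals I(x) obeys a linear system dI/dx = A(x) I(x) in a kinematic variable x, which can be integrated numerically from a point where the boundary values are simple.

      import numpy as np
      from scipy.integrate import solve_ivp

      def a_matrix(x):
          # Toy 2x2 coefficient matrix with the 1/x, 1/(1-x) structure typical of such systems (assumption).
          return np.array([[0.0, 1.0 / x],
                           [0.1 / (1.0 - x), -0.5 / x]])

      def rhs(x, i_vec):
          # dI/dx = A(x) I(x)
          return a_matrix(x) @ i_vec

      i_boundary = np.array([1.0, 0.0])   # simple boundary values at x = 0.1 (illustrative)
      sol = solve_ivp(rhs, (0.1, 0.9), i_boundary, rtol=1e-10, atol=1e-12)
      print("I(0.9) ≈", sol.y[:, -1])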

  5. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
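
    A hedged numerical sketch of the robustness measure the record optimizes (not the authors' code): the minimum singular value of the return difference matrix I + L(jω) is computed over a frequency grid for an illustrative 2x2 loop transfer matrix, and its sensitivity to a controller gain is estimated here by finite differences, whereas the paper derives analytical singular-value gradients.

      import numpy as np

      def loop_tf(w, k):
          # Illustrative 2x2 loop transfer matrix L(jw) with a diagonal gain-k controller (assumption).
          s = 1j * w
          plant = np.array([[1.0 / (s + 1.0), 0.5 / (s + 2.0)],
                            [0.2 / (s + 1.5), 1.0 / (s + 3.0)]])
          return plant @ np.diag([k, k])

      def worst_sigma_min(k, freqs):
          # Smallest singular value of I + L(jw) over the grid; larger means more robust.
          return min(np.linalg.svd(np.eye(2) + loop_tf(w, k), compute_uv=False)[-1] for w in freqs)

      freqs = np.logspace(-1, 2, 200)
      k0, eps = 2.0, 1e-4
      print("worst-case sigma_min:", worst_sigma_min(k0, freqs))
      grad = (worst_sigma_min(k0 + eps, freqs) - worst_sigma_min(k0 - eps, freqs)) / (2 * eps)
      print("finite-difference sensitivity d(sigma_min)/dk ≈", grad)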

  6. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.

  7. Multiloop functional renormalization group for general models

    NASA Astrophysics Data System (ADS)

    Kugler, Fabian B.; von Delft, Jan

    2018-02-01

    We present multiloop flow equations in the functional renormalization group (fRG) framework for the four-point vertex and self-energy, formulated for a general fermionic many-body problem. This generalizes the previously introduced vertex flow [F. B. Kugler and J. von Delft, Phys. Rev. Lett. 120, 057403 (2018), 10.1103/PhysRevLett.120.057403] and provides the necessary corrections to the self-energy flow in order to complete the derivative of all diagrams involved in the truncated fRG flow. Due to its iterative one-loop structure, the multiloop flow is well suited for numerical algorithms, enabling improvement of many fRG computations. We demonstrate its equivalence to a solution of the (first-order) parquet equations in conjunction with the Schwinger-Dyson equation for the self-energy.

  8. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals.
    Program summary:
    Program title: SecDec 2.0
    Catalogue identifier: AEIR_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 156829
    No. of bytes in distributed program, including test data, etc.: 2137907
    Distribution format: tar.gz
    Programming language: Wolfram Mathematica, Perl, Fortran/C++
    Computer: From a single PC to a cluster, depending on the problem
    Operating system: Unix, Linux
    RAM: Depending on the complexity of the problem
    Classification: 4.4, 5, 11.1
    Catalogue identifier of previous version: AEIR_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566
    Does the new version supersede the previous version?: Yes
    Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds).
    Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization parameter ɛ, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way.
    Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization.
    Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters.
    Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0.
    Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.

  9. Dilatonic parallelizable NS-NS backgrounds

    NASA Astrophysics Data System (ADS)

    Kawano, Teruhiko; Yamaguchi, Satoshi

    2003-08-01

    We complete the classification of parallelizable NS-NS backgrounds in type II supergravity by adding the dilatonic case to the result of Figueroa-O'Farrill on the non-dilatonic case. We also study the supersymmetry of these parallelizable backgrounds. It is shown that all the dilatonic parallelizable backgrounds have sixteen supersymmetries.

  10. Multiloop ghost vertices and the determination of the multiloop measure

    NASA Astrophysics Data System (ADS)

    West, P.

    1988-04-01

    Using the group-theoretic approach to string theory, the multiloop vertices previously computed are extended to include ghost oscillators. In accord with the approach, we show how demanding that zero-norm physical states decouple leads to a set of first-order differential equations which uniquely determine the multiloop measure.

  11. An approach toward the numerical evaluation of multi-loop Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Passarino, Giampiero

    2001-12-01

    A scheme for systematically achieving accurate numerical evaluation of multi-loop Feynman diagrams is developed. This shows the feasibility of a project aimed at producing a complete calculation for two-loop predictions in the Standard Model. As a first step, an algorithm proposed by F.V. Tkachov and based on the so-called generalized Bernstein functional relation is applied to one-loop multi-leg diagrams, with particular emphasis on the presence of infrared singularities, the problem of tensorial reduction, and the classification of all singularities of a given diagram. Subsequently, the extension of the algorithm to two-loop diagrams is examined. The proposed solution consists in applying the functional relation to the one-loop sub-diagram which has the largest number of internal lines. In this way the integrand can be made smooth, apart from a factor which is a polynomial in xS, the vector of Feynman parameters needed for the complementary sub-diagram with the smallest number of internal lines. Since the procedure does not introduce new singularities, one can distort the xS-integration hyper-contour into the complex hyper-plane, thus achieving numerical stability. The algorithm is then modified to deal with numerical evaluation around normal thresholds. Concise and practical formulas are assembled and presented; numerical results and comparisons with the available literature are shown and discussed for the so-called sunset topology.

  12. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for sequencing the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), was proposed to solve the problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) produce more broken edges when untying the loops in the dependency relationships of relays, which can lead to more iterative calculation workload in setting calculations. A model-driven approach based on behavior trees (BT) was presented to improve adaptability to similar problems. After extending the BT model by adding real-time system characteristics, a timed BT was derived and the dependency relationships in multi-loop networks were then modeled. The model was translated into communicating sequential processes (CSP) models and an optimized setting calculation sequence in multi-loop networks was finally computed by tools. A 5-node multi-loop network was used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were then calculated, with results indicating that the method effectively reduces the number of forced broken edges for protection setting calculation in multi-loop networks.

  13. Multi-loop Integrand Reduction with Computational Algebraic Geometry

    NASA Astrophysics Data System (ADS)

    Badger, Simon; Frellesvig, Hjalte; Zhang, Yang

    2014-06-01

    We discuss recent progress in multi-loop integrand reduction methods. Motivated by the possibility of an automated construction of multi-loop amplitudes via generalized unitarity cuts we describe a procedure to obtain a general parameterisation of any multi-loop integrand in a renormalizable gauge theory. The method relies on computational algebraic geometry techniques such as Gröbner bases and primary decomposition of ideals. We present some results for two and three loop amplitudes obtained with the help of the MACAULAY2 computer algebra system and the Mathematica package BASISDET.

  14. Application of matrix singular value properties for evaluating gain and phase margins of multiloop systems. [stability margins for wing flutter suppression and drone lateral attitude control

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.

    1982-01-01

    A stability margin evaluation method in terms of simultaneous gain and phase changes in all loops of a multiloop system is presented. A universal gain-phase margin evaluation diagram is constructed by generalizing an existing method using matrix singular value properties. Using this diagram and computing the minimum singular value of the system return difference matrix over the operating frequency range, regions of guaranteed stability margins can be obtained. Singular values are computed for a wing flutter suppression and a drone lateral attitude control problem. The numerical results indicate that this method predicts quite conservative stability margins. In the second example, if the eigenvalue magnitude is used instead of the singular value as a measure of nearness to singularity, more realistic stability margins are obtained. However, this relaxed measure generally cannot guarantee global stability.
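
    For context, a small hedged computation of the standard singular-value margin bounds that this type of diagram encodes (illustrative numbers, not taken from the paper): if α is the minimum over frequency of σ_min(I + L(jω)), then simultaneous gain changes within [1/(1+α), 1/(1−α)] and phase changes within ±2 arcsin(α/2) in every loop are guaranteed not to destabilize the system.

      import numpy as np

      alpha = 0.6   # example worst-case sigma_min(I + L); an assumed value
      gain_lo, gain_hi = 1.0 / (1.0 + alpha), 1.0 / (1.0 - alpha)
      phase_deg = np.degrees(2.0 * np.arcsin(alpha / 2.0))
      print(f"guaranteed simultaneous gain range: [{gain_lo:.2f}, {gain_hi:.2f}]")
      print(f"guaranteed simultaneous phase margin: +/- {phase_deg:.1f} deg")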

  15. A multiloop generalization of the circle criterion for stability margin analysis

    NASA Technical Reports Server (NTRS)

    Safonov, M. G.; Athans, M.

    1979-01-01

    In order to provide a theoretical tool suited for characterizing the stability margins of multiloop feedback systems, multiloop input-output stability results generalizing the circle stability criterion are considered. Generalized conic sectors with 'centers' and 'radii' determined by linear dynamical operators are employed to specify the stability margins as a frequency dependent convex set of modeling errors (including nonlinearities, gain variations and phase variations) which the system must be able to tolerate in each feedback loop without instability. The resulting stability criterion gives sufficient conditions for closed loop stability in the presence of frequency dependent modeling errors, even when the modeling errors occur simultaneously in all loops. The stability conditions yield an easily interpreted scalar measure of the amount by which a multiloop system exceeds, or falls short of, its stability margin specifications.

  16. Multiloop Manual Control of Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1984-01-01

    Human interaction with a simple, multiloop dynamic system, in which the human's activity was systematically varied by changing the levels of automation, was studied. The control loop structure resulting from the task definition parallels that for any multiloop manual control system and is considered a stereotype. Simple models of the human in the task were developed, and a technique for describing the manner in which the human subjectively quantifies his opinion of task difficulty was extended. A man-in-the-loop simulation which provides data to support and direct the analytical effort is presented.

  17. Multigrid Techniques for Highly Indefinite Equations

    NASA Technical Reports Server (NTRS)

    Shapira, Yair

    1996-01-01

    A multigrid method for the solution of finite difference approximations of elliptic PDE's is introduced. A parallelizable version of it, suitable for two and multi level analysis, is also defined, and serves as a theoretical tool for deriving a suitable implementation for the main version. For indefinite Helmholtz equations, this analysis provides a suitable mesh size for the coarsest grid used. Numerical experiments show that the method is applicable to diffusion equations with discontinuous coefficients and highly indefinite Helmholtz equations.

  18. Numerical algebraic geometry: a new perspective on gauge and string theories

    NASA Astrophysics Data System (ADS)

    Mehta, Dhagash; He, Yang-Hui; Hauenstein, Jonathan D.

    2012-07-01

    There is a rich interplay between algebraic geometry and string and gauge theories which has been recently aided immensely by advances in computational algebra. However, symbolic (Gröbner) methods are severely limited by algorithmic issues such as exponential space complexity and being highly sequential. In this paper, we introduce a novel paradigm of numerical algebraic geometry which in a plethora of situations overcomes these shortcomings. The so-called `embarrassing parallelizability' allows us to solve many problems and extract physical information which elude symbolic methods. We describe the method and then use it to solve various problems arising from physics which could not be otherwise solved.

  19. Automation effects in a stereotypical multiloop manual control system. [for aircraft

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1984-01-01

    The increasing reliance of state-of-the art, high performance aircraft on high authority stability and command augmentation systems, in order to obtain satisfactory performance and handling qualities, has made critical the achievement of a better understanding of human capabilities, limitations, and preferences during interactions with complex dynamic systems that involve task allocation between man and machine. An analytical and experimental study has been undertaken to investigate human interaction with a simple, multiloop dynamic system in which human activity was systematically varied by changing the levels of automation. Task definition has led to a control loop structure which parallels that for any multiloop manual control system, and may therefore be considered a stereotype.

  20. Integrated analysis of particle interactions at hadron colliders Report of research activities in 2010-2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadolsky, Pavel M.

    2015-08-31

    The report summarizes research activities of the project "Integrated analysis of particle interactions" at Southern Methodist University, funded by 2010 DOE Early Career Research Award DE-SC0003870. The goal of the project is to provide state-of-the-art predictions in quantum chromodynamics in order to achieve objectives of the LHC program for studies of electroweak symmetry breaking and new physics searches. We published 19 journal papers focusing on in-depth studies of proton structure and integration of advanced calculations from different areas of particle phenomenology: multi-loop calculations, accurate long-distance hadronic functions, and precise numerical programs. Methods for factorization of QCD cross sections were advanced in order to develop new generations of CTEQ parton distribution functions (PDFs), CT10 and CT14. These distributions provide the core theoretical input for multi-loop perturbative calculations by LHC experimental collaborations. A novel "PDF meta-analysis" technique was invented to streamline applications of PDFs in numerous LHC simulations and to combine PDFs from various groups using multivariate stochastic sampling of PDF parameters. The meta-analysis will help to bring the LHC perturbative calculations to a new level of accuracy, while reducing computational efforts. The work on parton distributions was complemented by development of advanced perturbative techniques to predict observables dependent on several momentum scales, including production of massive quarks and transverse momentum resummation at the next-to-next-to-leading order in QCD.

  1. Substructured multibody molecular dynamics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grest, Gary Stephen; Stevens, Mark Jackson; Plimpton, Steven James

    2006-11-01

    We have enhanced our parallel molecular dynamics (MD) simulation software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator, lammps.sandia.gov) to include many new features for accelerated simulation including articulated rigid body dynamics via coupling to the Rensselaer Polytechnic Institute code POEMS (Parallelizable Open-source Efficient Multibody Software). We use new features of the LAMMPS software package to investigate rhodopsin photoisomerization, and water model surface tension and capillary waves at the vapor-liquid interface. Finally, we motivate the recipes of MD for practitioners and researchers in numerical analysis and computational mechanics.

  2. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
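
    A minimal sketch of a matrix Newton (Hotelling-Bodewig) iteration of the kind the record refers to, written here in NumPy rather than the authors' Cray-2 implementation: each step consists only of matrix multiplications, which is what makes the scheme attractive for massively parallel hardware.

      import numpy as np

      def newton_inverse(a, iters=30):
          # Iterate X <- X (2I - A X); converges quadratically when ||I - A X0|| < 1.
          n = a.shape[0]
          # Standard safe starting guess: X0 = A^T / (||A||_1 ||A||_inf).
          x = a.T / (np.linalg.norm(a, 1) * np.linalg.norm(a, np.inf))
          eye2 = 2.0 * np.eye(n)
          for _ in range(iters):
              x = x @ (eye2 - a @ x)
          return x

      a = np.random.default_rng(0).random((100, 100)) + 100.0 * np.eye(100)  # well-conditioned test matrix
      x = newton_inverse(a)
      print("max |A X - I| =", np.abs(a @ x - np.eye(100)).max())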

  3. Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.

    2018-03-01

    We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
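
    A toy sketch of the linear extrapolation idea (not the PARINT/QUADPACK machinery itself): the regulated integral is evaluated at a geometric sequence of regulator values and a small linear system is solved for the leading Laurent coefficients; the integrand below is a synthetic stand-in with a known pole.

      import numpy as np

      def regulated_integral(eps):
          # Synthetic stand-in: I(eps) = 1/eps + 0.5 + 0.25*eps, mimicking a UV-divergent integral.
          return 1.0 / eps + 0.5 + 0.25 * eps

      eps_vals = 0.5 ** np.arange(1, 4)                    # geometric sequence of regulator values
      rhs = np.array([regulated_integral(e) for e in eps_vals])
      mat = np.column_stack([1.0 / eps_vals, np.ones_like(eps_vals), eps_vals])
      coeffs = np.linalg.solve(mat, rhs)                   # (c_-1, c_0, c_1) of the Laurent expansion
      print("extrapolated Laurent coefficients:", coeffs)  # ≈ [1.0, 0.5, 0.25]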

  4. Automation effects in a multiloop manual control system

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1986-01-01

    An experimental and analytical study was undertaken to investigate human interaction with a simple multiloop manual control system in which the human's activity was systematically varied by changing the level of automation. The system simulated was the longitudinal dynamics of a hovering helicopter. The automation-systems-stabilized vehicle responses from attitude to velocity to position and also provided for display automation in the form of a flight director. The control-loop structure resulting from the task definition can be considered a simple stereotype of a hierarchical control system. The experimental study was complemented by an analytical modeling effort which utilized simple crossover models of the human operator. It was shown that such models can be extended to the description of multiloop tasks involving preview and precognitive human operator behavior. The existence of time optimal manual control behavior was established for these tasks and the role which internal models may play in establishing human-machine performance was discussed.

  5. Prediction of pilot opinion ratings using an optimal pilot model. [of aircraft handling qualities in multiaxis tasks

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.

  6. Perturbative quantum gravity as a double copy of gauge theory.

    PubMed

    Bern, Zvi; Carrasco, John Joseph M; Johansson, Henrik

    2010-08-06

    In a previous paper we observed that (classical) tree-level gauge-theory amplitudes can be rearranged to display a duality between color and kinematics. Once this is imposed, gravity amplitudes are obtained using two copies of gauge-theory diagram numerators. Here we conjecture that this duality persists to all quantum loop orders and can thus be used to obtain multiloop gravity amplitudes easily from gauge-theory ones. As a nontrivial test, we show that the three-loop four-point amplitude of N=4 super-Yang-Mills theory can be arranged into a form satisfying the duality, and by taking double copies of the diagram numerators we obtain the corresponding amplitude of N=8 supergravity. We also remark on a nonsupersymmetric two-loop test based on pure Yang-Mills theory resulting in gravity coupled to an antisymmetric tensor and dilaton.

  7. Correction of Angle Class II division 1 malocclusion with a mandibular protraction appliances and multiloop edgewise archwire technique

    PubMed Central

    Freitas, Heloiza; dos Santos, Pedro César F; Janson, Guilherme

    2014-01-01

    A Brazilian girl aged 14 years and 9 months presented with a chief complaint of protrusive teeth. She had a convex facial profile, extreme overjet, deep bite, lack of passive lip seal, acute nasolabial angle, and retrognathic mandible. Intraorally, she showed maxillary diastemas, slight mandibular incisor crowding, a small maxillary arch, 13-mm overjet, and 4-mm overbite. After the diagnosis of severe Angle Class II division 1 malocclusion, a mandibular protraction appliance was placed to correct the Class II relationships and multiloop edgewise archwires were used for finishing. Follow-up examinations revealed an improved facial profile, normal overjet and overbite, and good intercuspation. The patient was satisfied with her occlusion, smile, and facial appearance. The excellent results suggest that orthodontic camouflage by using a mandibular protraction appliance in combination with the multiloop edgewise archwire technique is an effective option for correcting Class II malocclusions in patients who refuse orthognathic surgery. PMID:25309867

  8. Development of a sensitivity analysis technique for multiloop flight control systems

    NASA Technical Reports Server (NTRS)

    Vaillard, A. H.; Paduano, J.; Downing, D. R.

    1985-01-01

    This report presents the development and application of a sensitivity analysis technique for multiloop flight control systems. This analysis yields very useful information on the sensitivity of the relative-stability criteria of the control system, with variations or uncertainties in the system and controller elements. The sensitivity analysis technique developed is based on the computation of the singular values and singular-value gradients of a feedback-control system. The method is applicable to single-input/single-output as well as multiloop continuous-control systems. Application to sampled-data systems is also explored. The sensitivity analysis technique was applied to a continuous yaw/roll damper stability augmentation system of a typical business jet, and the results show that the analysis is very useful in determining the system elements which have the largest effect on the relative stability of the closed-loop system. As a secondary product of the research reported here, the relative stability criteria based on the concept of singular values were explored.

  9. On the use of reverse Brownian motion to accelerate hybrid simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu

    Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
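
    A hedged illustration of the Feynman-Kac idea such particle solvers build on (this is not the rBm algorithm of the paper): for steady diffusion on [0, 1] with Dirichlet data u(0)=a, u(1)=b, the value u(x) equals the expected boundary value at the exit point of a Brownian walker started at x, and the walkers are independent, hence trivially parallelizable.

      import numpy as np

      def estimate_u(x0, a=0.0, b=1.0, walkers=10_000, dt=1e-3, rng=np.random.default_rng(0)):
          x = np.full(walkers, x0)
          alive = np.ones(walkers, dtype=bool)
          values = np.empty(walkers)
          while alive.any():
              x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
              hit_lo, hit_hi = alive & (x <= 0.0), alive & (x >= 1.0)
              values[hit_lo], values[hit_hi] = a, b       # record the Dirichlet value at the exit
              alive &= ~(hit_lo | hit_hi)
          return values.mean()

      print("u(0.3) ≈", estimate_u(0.3))   # exact solution of u'' = 0 with these data is 0.3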

  10. Multilevel filtering elliptic preconditioners

    NASA Technical Reports Server (NTRS)

    Kuo, C. C. Jay; Chan, Tony F.; Tong, Charles

    1989-01-01

    A class of preconditioners is presented for elliptic problems built on ideas borrowed from the digital filtering theory and implemented on a multilevel grid structure. They are designed to be both rapidly convergent and highly parallelizable. The digital filtering viewpoint allows the use of filter design techniques for constructing elliptic preconditioners and also provides an alternative framework for understanding several other recently proposed multilevel preconditioners. Numerical results are presented to assess the convergence behavior of the new methods and to compare them with other preconditioners of multilevel type, including the usual multigrid method as preconditioner, the hierarchical basis method and a recent method proposed by Bramble-Pasciak-Xu.

  11. Formulation of image fusion as a constrained least squares optimization problem

    PubMed Central

    Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge

    2017-01-01

    Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885

  12. Numerical simulations of loops heated to solar flare temperatures. III - Asymmetrical heating

    NASA Technical Reports Server (NTRS)

    Cheng, C.-C.; Doschek, G. A.; Karpen, J. T.

    1984-01-01

    A numerical model is defined for asymmetric full solar flare loop heating and comparisons are made with observational data. The Dynamic Flux Tube Model is used to describe the heating process in terms of one-dimensional, two fluid conservation equations of mass, energy and momentum. An adaptive grid allows for the downward movement of the transition region caused by an advancing conduction front. A loop 20,000 km long is considered, along with a flare heating system and the hydrodynamic evolution of the loop. The model was applied to generating line profiles and spatial X-ray and UV line distributions, which were compared with SMM, P78-1 and Hinotori data for Fe, Ca and Mg spectra. Little agreement was obtained, and it is suggested that flares be treated as multi-loop phenomena. Finally, it is concluded that chromospheric evaporation is not an effective mechanism for generating the soft X-ray bursts associated with flares.

  13. Hybrid suboptimal control of multi-rate multi-loop sampled-data systems

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Chen, Gwangchywan; Tsai, Jason S. H.

    1992-01-01

    A hybrid state-space controller is developed for suboptimal digital control of multirate multiloop multivariable continuous-time systems. First, an LQR is designed for a continuous-time subsystem which has a large bandwidth and is connected in the inner loop of the overall system. The designed LQR would optimally place the eigenvalues of a closed-loop subsystem in the common region of an open sector bounded by sector angles ± π/2k for k = 2 or 3 from the negative real axis and the left-hand side of a vertical line on the negative real axis in the s-plane. Then, the developed continuous-time state-feedback gain is converted into an equivalent fast-rate discrete-time state-feedback gain via a digital redesign technique (Tsai et al. 1989, Shieh et al. 1990) reviewed here. A real state reconstructor is redeveloped utilizing the fast-rate input-output data of the system of interest. The design procedure of multiloop multivariable systems using multirate samplers is shown, and a terminal homing missile system example is used to demonstrate the effectiveness of the proposed method.

  14. Flux trapping in multi-loop SQUIDs and its impact on SQUID-based absolute magnetometry

    NASA Astrophysics Data System (ADS)

    Schönau, T.; Zakosarenko, V.; Schmelz, M.; Anders, S.; Meyer, H.-G.; Stolz, R.

    2018-07-01

    The effect of flux trapping on the flux-voltage characteristics of multi-loop SQUID magnetometers was investigated by means of repeated cool-down cycles in a stepwise increased magnetic background field. For a SQUID with N parallel loops, N different flux offsets, each separated by Φ0/N, were observed even in zero magnetic field. These flux offsets further split into a so-called fine structure, which can be explained by minor asymmetries in the SQUID design. The observed results are discussed with particular regard to their impact on the previously presented absolute SQUID cascade vector magnetometer.

  15. Multiloop amplitudes of light-cone gauge superstring field theory: odd spin structure contributions

    NASA Astrophysics Data System (ADS)

    Ishibashi, Nobuyuki; Murakami, Koichi

    2018-03-01

    We study the odd spin structure contributions to the multiloop amplitudes of light-cone gauge superstring field theory. We show that they coincide with the amplitudes in the conformal gauge with two of the vertex operators chosen to be in the pictures different from the standard choice, namely (-1, -1) picture in the type II case and -1 picture in the heterotic case. We also show that the contact term divergences can be regularized in the same way as in the amplitudes for the even structures and we get the amplitudes which coincide with those obtained from the first-quantized approach.

  16. Streamline integration as a method for two-dimensional elliptic grid generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.

    We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary-aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach.
    Highlights:
    • Construct structured, elliptic numerical grids with elementary numerical methods.
    • Align coordinate lines with or make them orthogonal to the domain boundary.
    • Compute grid points and metric elements up to machine precision.
    • Control cell distribution by adaption functions or monitor metrics.

  17. Pilot dynamics for instrument approach tasks: Full panel multiloop and flight director operations

    NASA Technical Reports Server (NTRS)

    Weir, D. H.; Mcruer, D. T.

    1972-01-01

    Measurements and interpretations of single and multiloop pilot response properties during simulated instrument approach are presented. Pilot subjects flew Category 2-like ILS approaches in a fixed-base DC-8 simulation. A conventional instrument panel and controls were used, with simulated vertical gust and glide slope beam bend forcing functions. Reduced and interpreted pilot describing functions and remnant are given for pitch attitude, flight director, and multiloop (longitudinal) control tasks. The response data are correlated with simultaneously recorded eye scanning statistics, previously reported in NASA CR-1535. The resulting combined response and scanning data and their interpretations provide a basis for validating and extending the theory of manual control displays.

  18. Threshold resummation of the rapidity distribution for Higgs production at NNLO +NNLL

    NASA Astrophysics Data System (ADS)

    Banerjee, Pulak; Das, Goutam; Dhani, Prasanna K.; Ravindran, V.

    2018-03-01

    We present a formalism that resums threshold-enhanced logarithms to all orders in perturbative QCD for the rapidity distribution of any colorless particle produced in hadron colliders. We achieve this by exploiting the factorization properties and K+G equations satisfied by the soft and virtual parts of the cross section. We compute for the first time compact and most general expressions in two-dimensional Mellin space for the resummed coefficients. Using various state-of-the-art multiloop and multileg results, we demonstrate the numerical impact of our resummed results up to next-to-next-to-leading order for the rapidity distribution of the Higgs boson at the LHC. We find that inclusion of these threshold logs through resummation improves the reliability of perturbative predictions.

  19. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
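
    A minimal sketch of the basic randomized low-rank SVD underlying algorithms of this kind (random range finder, optional power iterations, then a small deterministic SVD); this is a generic NumPy illustration, not the published MATLAB implementation.

      import numpy as np

      def randomized_svd(a, k, oversample=10, power_iters=2, rng=np.random.default_rng(0)):
          # Approximate the top-k singular triplets of an m x n matrix a.
          omega = rng.standard_normal((a.shape[1], k + oversample))
          y = a @ omega                          # sample the range of a
          for _ in range(power_iters):           # power iterations sharpen the spectrum
              y = a @ (a.T @ y)
          q, _ = np.linalg.qr(y)                 # orthonormal basis for the sampled range
          u_b, s, vt = np.linalg.svd(q.T @ a, full_matrices=False)
          return (q @ u_b)[:, :k], s[:k], vt[:k]

      a = np.random.default_rng(1).standard_normal((500, 200))
      u, s, vt = randomized_svd(a, k=10)
      err = np.linalg.norm(a - u @ np.diag(s) @ vt) / np.linalg.norm(a)
      print("relative error of the rank-10 approximation:", err)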

  20. Flight-determined stability analysis of multiple-input-multiple-output control systems

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    1992-01-01

    Singular value analysis can give conservative stability margin results. Applying structure to the uncertainty can reduce this conservatism. This paper presents flight-determined stability margins for the X-29A lateral-directional, multiloop control system. These margins are compared with the predicted unscaled singular values and scaled structured singular values. The algorithm was further evaluated with flight data by changing the roll-rate-to-aileron command-feedback gain by +/- 20 percent. Minimum eigenvalues of the return difference matrix which bound the singular values are also presented. Extracting multiloop singular values from flight data and analyzing the feedback gain variations validates this technique as a measure of robustness. This analysis can be used for near-real-time flight monitoring and safety testing.

  1. Flight-determined stability analysis of multiple-input-multiple-output control systems

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    1992-01-01

    Singular value analysis can give conservative stability margin results. Applying structure to the uncertainty can reduce this conservatism. This paper presents flight-determined stability margins for the X-29A lateral-directional, multiloop control system. These margins are compared with the predicted unscaled singular values and scaled structured singular values. The algorithm was further evaluated with flight data by changing the roll-rate-to-aileron-command-feedback gain by +/- 20 percent. Also presented are the minimum eigenvalues of the return difference matrix which bound the singular values. Extracting multiloop singular values from flight data and analyzing the feedback gain variations validates this technique as a measure of robustness. This analysis can be used for near-real-time flight monitoring and safety testing.

  2. Multi-loop control of UPS inverter with a plug-in odd-harmonic repetitive controller.

    PubMed

    Razi, Reza; Karbasforooshan, Mohammad-Sadegh; Monfared, Mohammad

    2017-03-01

    This paper proposes an improved multi-loop control scheme for the single-phase uninterruptible power supply (UPS) inverter, using a plug-in odd-harmonic repetitive controller to regulate the output voltage. In the suggested control method, the output voltage and the filter capacitor current are used as the outer and inner loop feedback signals, respectively, and the instantaneous value of the reference voltage is fed forward to the output of the controller. Instead of conventional linear (proportional-integral/-resonant) and conventional repetitive controllers, a plug-in odd-harmonic repetitive controller is employed in the outer loop to regulate the output voltage; it occupies less memory space and offers faster tracking performance compared to the conventional one. Also, a simple proportional controller is used in the inner loop for active damping of possible resonances and improving the transient performance. The feedforward of the converter reference voltage enhances the robust performance of the system and simplifies the system modelling and the controller design. A step-by-step design procedure is presented for the proposed controller, which guarantees stability of the system under worst-case scenarios. Simulation and experimental results validate the excellent steady-state and transient performance of the proposed control scheme and provide an exact comparison of the proposed method with the conventional multi-loop control method.
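
    A small hedged sketch of why an odd-harmonic internal model saves memory (parameters below are illustrative, not from the paper): the model G(z) = −z^(−N/2) / (1 + z^(−N/2)) uses only a half-period delay line yet places resonant poles at the odd harmonics of the fundamental, which dominate the distortion of a single-phase inverter.

      import numpy as np

      fs, f0 = 10_000.0, 50.0            # sampling and fundamental frequency (assumed values)
      N = int(fs / f0)                   # samples per fundamental period

      def internal_model_gain(f):
          # |G(e^{j 2 pi f / fs})| for the odd-harmonic internal model with a half-period delay.
          z = np.exp(2j * np.pi * f / fs)
          return np.abs(-z ** (-(N // 2)) / (1.0 + z ** (-(N // 2))))

      for harmonic in (1, 2, 3, 4, 5):
          f = harmonic * f0 * 0.999      # evaluate just off the harmonic to keep the gain finite
          print(f"harmonic {harmonic}: |G| ≈ {internal_model_gain(f):.1f}")
      # Very large gains appear at the odd harmonics only; even harmonics see a gain near 0.5.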

  3. The Local Discontinuous Galerkin Method for Time-Dependent Convection-Diffusion Systems

    NASA Technical Reports Server (NTRS)

    Cockburn, Bernardo; Shu, Chi-Wang

    1997-01-01

    In this paper, we study the Local Discontinuous Galerkin methods for nonlinear, time-dependent convection-diffusion systems. These methods are an extension of the Runge-Kutta Discontinuous Galerkin methods for purely hyperbolic systems to convection-diffusion systems and share with those methods their high parallelizability, their high-order formal accuracy, and their easy handling of complicated geometries for convection-dominated problems. It is proven that for scalar equations, the Local Discontinuous Galerkin methods are L²-stable in the nonlinear case. Moreover, in the linear case, it is shown that if polynomials of degree k are used, the methods are k-th order accurate for general triangulations; although this order of convergence is suboptimal, it is sharp for the LDG methods. Preliminary numerical examples displaying the performance of the method are shown.

  4. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurements are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized in a multicore computer and a GPU respectively.

  5. A Course in... Multivariable Control Methods.

    ERIC Educational Resources Information Center

    Deshpande, Pradeep B.

    1988-01-01

    Describes an engineering course for graduate study in process control. Lists four major topics: interaction analysis, multiloop controller design, decoupling, and multivariable control strategies. Suggests a course outline and gives information about each topic. (MVL)

  6. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient for the calculation of higher order radiation effects, and are responsible for reliable and accurate theoretical prediction. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with accuracy, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass.
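
    A hedged CPU-only sketch of the quasi-Monte Carlo ingredient (the paper pairs it with sector decomposition and CUDA/GPU acceleration; the integrand below is a smooth toy stand-in, not a real two-loop integrand): low-discrepancy Sobol points replace pseudo-random points for integration over the unit hypercube.

      import numpy as np
      from scipy.stats import qmc

      def integrand(x):
          # Toy smooth Feynman-parameter-like integrand on the unit square (assumption).
          return 1.0 / (1.0 + x[:, 0] * x[:, 1]) ** 2

      sampler = qmc.Sobol(d=2, scramble=True, seed=0)
      points = sampler.random_base2(m=16)       # 2^16 scrambled Sobol points
      print("QMC estimate:", integrand(points).mean())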

  7. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. This numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking as a departure point the BBD matrix structure. This block-parallel approach may give a considerable profit, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.

  8. Solving Graph Laplacian Systems Through Recursive Bisections and Two-Grid Preconditioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponce, Colin; Vassilevski, Panayot S.

    2016-02-18

    We present a parallelizable direct method for computing the solution to graph Laplacian-based linear systems derived from graphs that can be hierarchically bipartitioned with small edge cuts. For a graph of size n with constant-size edge cuts, our method decomposes a graph Laplacian in time O(n log n), and then uses that decomposition to perform a linear solve in time O(n log n). We then use the developed technique to design a preconditioner for graph Laplacians that do not have this property. Finally, we augment this preconditioner with a two-grid method that accounts for much of the preconditioner's weaknesses. We present an analysis of this method, as well as a general theorem for the condition number of a general class of two-grid support graph-based preconditioners. Numerical experiments illustrate the performance of the studied methods.

  9. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
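
    A minimal random-walk Metropolis sketch of the kind of sampling behind MCMC Bayesian credible intervals (UCODE_2014 itself uses the multi-chain DREAM algorithm; the model and data here are synthetic placeholders).

      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.normal(loc=2.0, scale=1.0, size=50)        # synthetic observations

      def log_post(theta):
          # Gaussian likelihood with known unit variance and a flat prior on the mean.
          return -0.5 * np.sum((data - theta) ** 2)

      theta, samples = 0.0, []
      for _ in range(20_000):
          proposal = theta + 0.3 * rng.standard_normal()
          if np.log(rng.random()) < log_post(proposal) - log_post(theta):
              theta = proposal
          samples.append(theta)

      burned = np.array(samples[5_000:])                     # discard burn-in
      lo, hi = np.percentile(burned, [2.5, 97.5])
      print(f"95% credible interval for the mean: [{lo:.2f}, {hi:.2f}]")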

  10. Efficient parallel implicit methods for rotary-wing aerodynamics calculations

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.

    Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by the lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize its potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier-Stokes (TURNS) code. The new hybrid LU-SGS scheme couples a point-relaxation approach of the Data Parallel-Lower Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss-Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using the Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multi-processors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It experiences a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines the use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good parallel performance on the IBM SP2, with OSGCR giving slightly better performance than GMRES on large numbers of processors. For steady and quasi-steady calculations, the convergence rate is accelerated but the overall solution time remains about the same as the standard hybrid LU-SGS scheme. For unsteady calculations, however, the Newton method maintains a higher degree of time-accuracy, which allows the use of larger timesteps and results in CPU savings of 20-35%.

  11. Human performance evaluation in dual-axis critical task tracking

    NASA Technical Reports Server (NTRS)

    Ritchie, M. L.; Nataraj, N. S.

    1975-01-01

    A dual-axis tracking task using a multiloop critical task was set up to evaluate human performance. The effects of control stick variation and display format were evaluated. A secondary loading was used to measure the degradation in tracking performance.

  12. Numerical Polynomial Homotopy Continuation Method and String Vacua

    DOE PAGES

    Mehta, Dhagash

    2011-01-01

    Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications and analyzing them according to their stability is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated model of a compactified M-theory with an SU(3) structure can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.
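
    The core idea of homotopy continuation can be sketched in a few lines for a single univariate polynomial: deform a start system with known roots into the target system and track every root numerically. The toy tracker below (zero-order prediction plus Newton correction, with an arbitrary complex gamma to avoid singular paths) only illustrates the principle; NPHC packages use multivariate, polyhedral machinery far beyond this.

    ```python
    import numpy as np

    def track_roots(target_coeffs, steps=200, newton_iters=5, gamma=0.6 + 0.8j):
        """Track all roots of a univariate polynomial (coefficients highest degree
        first) along the homotopy H(z, t) = (1 - t)*gamma*(z^d - 1) + t*p(z)."""
        p = np.poly1d(target_coeffs)
        dp = p.deriv()
        d = len(target_coeffs) - 1
        g = np.poly1d([1.0] + [0.0] * (d - 1) + [-1.0])   # start system z^d - 1
        dg = g.deriv()
        roots = np.exp(2j * np.pi * np.arange(d) / d)      # its known roots

        for k in range(1, steps + 1):
            t = k / steps
            for _ in range(newton_iters):                  # Newton correction at this t
                H = (1 - t) * gamma * g(roots) + t * p(roots)
                dH = (1 - t) * gamma * dg(roots) + t * dp(roots)
                roots = roots - H / dH
        return roots

    # Usage: all roots of z^3 - 2z + 1, i.e. 1 and (-1 ± sqrt(5))/2.
    print(np.sort_complex(track_roots([1.0, 0.0, -2.0, 1.0])))
    ```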

  13. Aesthetic and functional outcomes using a multiloop edgewise archwire for camouflage orthodontic treatment of a severe Class III open bite malocclusion.

    PubMed

    Marañón-Vásquez, Guido Artemio; Soldevilla Galarza, Luciano Carlos; Tolentino Solis, Freddy Antonio; Wilson, Cliff; Romano, Fábio Lourenço

    2017-09-01

    Occasionally, orthodontists are challenged to treat malocclusions and skeletal disharmonies whose complexity might suggest that the only treatment alternative is a surgical-orthodontic approach. A 17-year-old male patient was diagnosed with a skeletal Class III malocclusion, anterior open bite and negative overjet. The patient's chief complaint was an unpleasing profile, and he expressed interest in improving his facial aesthetics. Nevertheless, the patient and his parents strongly preferred a non-surgical treatment approach. He was treated with a multiloop edgewise archwire to facilitate uprighting and distal en-masse movement of the lower teeth, correct the Class III open bite malocclusion, change the inclination of the occlusal plane and obtain the consequent morphological-functional adaptation of the mandible. The Class III malocclusion was corrected and satisfactory changes in the patient's profile were obtained. Active treatment was completed in 2 years, and the facial result remained stable at 2 years 6 months after debonding.

  14. Application of path-integral quantization to indistinguishable particle systems topologically confined by a magnetic field

    NASA Astrophysics Data System (ADS)

    Jacak, Janusz E.

    2018-01-01

    We demonstrate an original development of path-integral quantization in the case of a multiply connected configuration space of indistinguishable charged particles on a 2D manifold exposed to a strong perpendicular magnetic field. The system turns out to be exceptionally homotopy-rich, and the structure of the homotopy depends essentially on the magnetic field strength, resulting in multiloop trajectories under specific conditions. We have proved, by a generalization of the Bohr-Sommerfeld quantization rule, that the size of a magnetic field flux quantum grows for multiloop orbits like (2k+1)h/c with the number of loops k. Utilizing this property for electrons on the 2D substrate jellium, we have derived upon the path integration a complete FQHE hierarchy in excellent agreement with experiments. The path integral is then developed into a sum over configurations displaying various patterns of trajectory homotopies (topological configurations), which, in the nonstationary case of quantum kinetics, reproduces some previously unexplained details in the longitudinal resistivity observed in experiments.

  15. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of a method for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws as applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network when the network is treated as a single hydraulic system. On the basis of multivariant calculations, the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations can be made for planning the modernization of pipeline systems and the construction of new sections. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified by designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
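
    The electrical-circuit analogy the authors invoke can be illustrated with a few lines of linear algebra: for a linearized network with constant hydraulic conductances, Kirchhoff's first law at every free node gives a symmetric system for the nodal pressures. The sketch below uses made-up node and pipe data; a real hydraulic model would iterate because the pressure-flow relation is nonlinear.

    ```python
    import numpy as np

    # Nodes 0..3; node 0 is the reference (fixed-pressure) node.
    # Each pipe is (node_i, node_j, conductance); flow_ij = conductance * (p_i - p_j).
    pipes = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 1.5), (2, 3, 0.5), (0, 3, 1.0)]
    injections = {1: 0.0, 2: 0.4, 3: -0.4}   # external in/outflows at the free nodes

    free = [1, 2, 3]
    idx = {n: k for k, n in enumerate(free)}
    G = np.zeros((len(free), len(free)))
    q = np.array([injections[n] for n in free], dtype=float)

    # Assemble the nodal conductance matrix (Kirchhoff's first law at each free node).
    for i, j, c in pipes:
        for a, b in ((i, j), (j, i)):
            if a in idx:
                G[idx[a], idx[a]] += c
                if b in idx:
                    G[idx[a], idx[b]] -= c

    p = np.linalg.solve(G, q)   # pressures at the free nodes (node 0 held at p = 0)
    flows = {(i, j): c * ((p[idx[i]] if i in idx else 0.0) - (p[idx[j]] if j in idx else 0.0))
             for i, j, c in pipes}
    print(dict(zip(free, np.round(p, 4))), flows)
    ```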

  16. Free energy computations by minimization of Kullback-Leibler divergence: An efficient adaptive biasing potential method for sparse representations

    NASA Astrophysics Data System (ADS)

    Bilionis, I.; Koutsourelakis, P. S.

    2012-05-01

    The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.
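
    For reference, the objective mentioned in the abstract is built on the standard Kullback-Leibler divergence between two densities p and q; identifying p with the biased sampling density and q with the density implied by the current free energy estimate is our paraphrase of the setting, not the authors' notation:

    ```latex
    D_{\mathrm{KL}}(p \,\|\, q) \;=\; \int p(x)\,\ln\frac{p(x)}{q(x)}\,\mathrm{d}x \;\ge\; 0,
    \qquad D_{\mathrm{KL}}(p \,\|\, q) = 0 \iff p = q \ \text{almost everywhere}.
    ```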

  17. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed; it uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained with an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computations, a parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
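
    The Schur-complement systems mentioned above are solved with a preconditioned conjugate gradient (PCG) iteration. The sketch below is a textbook PCG with a Jacobi (diagonal) preconditioner applied to a random symmetric positive definite system; it only shows the structure of the solver, not the testbed's actual preconditioner or matrices.

    ```python
    import numpy as np

    def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradients for SPD A with a diagonal preconditioner."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv_diag * r
        p = z.copy()
        rz = r @ z
        for k in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                return x, k + 1
            z = M_inv_diag * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, max_iter

    # Usage on a random well-conditioned SPD system.
    rng = np.random.default_rng(1)
    B = rng.standard_normal((200, 200))
    A = B @ B.T + 200 * np.eye(200)
    b = rng.standard_normal(200)
    x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
    print(iters, np.linalg.norm(A @ x - b))
    ```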

  18. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows, I: Basic Theory

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, H. C.

    2003-01-01

    The objective of this paper is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD equations. These schemes employ multiresolution wavelets as adaptive numerical dissipation controls to limit the amount of dissipation and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative forms of the MHD equations in curvilinear grids. The four advantages of the present approach over existing MHD schemes reported in the open literature are as follows. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion MHD flows; available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition obtained from a minor modification of the eigenvectors of the non-conservative MHD equations to solve the conservative form of the MHD equations. Third, this approach of using the non-conservative eigensystem when solving the conservative equations also works well in the context of standard shock-capturing schemes for the MHD equations. Fourth, a new approach to minimize the numerical error of the divergence-free magnetic condition for high order schemes is introduced. Numerical experiments with typical MHD model problems revealed the applicability of the newly developed schemes for the MHD equations.

  19. A Dual-Loop Opto-Electronic Oscillator

    NASA Astrophysics Data System (ADS)

    Yao, X. S.; Maleki, L.; Ji, Y.; Lutes, G.; Tu, M.

    1998-07-01

    We describe and demonstrate a multiloop technique for single-mode selection in an opto-electronic oscillator (OEO). We present experimental results of a dual-loop OEO free running at 10 GHz that has the lowest phase noise (-140 dBc/Hz at 10 kHz from the carrier) of all free-running room-temperature oscillators to date.

  20. Multiloop Integral System Test (MIST): MIST Facility Functional Specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, T F; Koksal, C G; Moskal, T E

    1991-04-01

    The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST Functional Specification documents as-built design features, dimensions, instrumentation, and test approach. It also presents the scaling basis for the facility and serves to define the scope of work for the facility design and construction. 13 refs., 112 figs., 38 tabs.

  1. A Method for Measuring the Effective Throughput Time Delay in Simulated Displays Involving Manual Control

    NASA Technical Reports Server (NTRS)

    Jewell, W. F.; Clement, W. F.

    1984-01-01

    The advent and widespread use of the computer-generated image (CGI) device to simulate visual cues has a mixed impact on the realism and fidelity of flight simulators. On the plus side, CGIs provide greater flexibility in scene content than terrain boards and closed circuit television based visual systems, and they have the potential for a greater field of view. However, on the minus side, CGIs introduce into the visual simulation relatively long time delays. In many CGIs, this delay is as much as 200 ms, which is comparable to the inherent delay time of the pilot. Because most CGIs use multiloop processing and smoothing algorithms and are linked to a multiloop host computer, it is seldom possible to identify a unique throughput time delay, and it is therefore difficult to quantify the performance of the closed-loop pilot-simulator system relative to the real-world task. A method to address these issues using the critical task tester is described. Some empirical results from applying the method are presented, and a novel technique for improving the performance of CGIs is discussed.

  2. Experimental comparison of conventional and nonlinear model-based control of a mixing tank

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haeggblom, K.E.

    1993-11-01

    In this case study concerning control of a laboratory-scale mixing tank, conventional multiloop single-input single-output (SISO) control is compared with "model-based" control where the nonlinearity and multivariable characteristics of the process are explicitly taken into account. It is shown, especially if the operating range of the process is large, that the two outputs (level and temperature) cannot be adequately controlled by multiloop SISO control even if gain scheduling is used. By nonlinear multiple-input multiple-output (MIMO) control, on the other hand, very good control performance is obtained. The basic approach to nonlinear control used in this study is first to transform the process into a globally linear and decoupled system, and then to design controllers for this system. Because of the properties of the resulting MIMO system, the controller design is very easy. Two nonlinear control system designs based on a steady-state and a dynamic model, respectively, are considered. In the dynamic case, both setpoint tracking and disturbance rejection can be addressed separately.
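
    As a rough illustration of the "transform into a globally linear and decoupled system" idea (not the authors' exact transformation or tank model), consider a tank with hot and cold inlet flows, level h and temperature T. If outer controllers demand the rates v1 = dh/dt and v2 = dT/dt, inverting the assumed mass and energy balances for the physical flows makes the two loops look linear and decoupled from the outside:

    ```python
    import numpy as np

    # Assumed illustrative tank model (parameters are made up):
    #   A*dh/dt   = F_h + F_c - F_out(h)
    #   A*h*dT/dt = F_h*(T_hot - T) + F_c*(T_cold - T)
    A_tank, T_hot, T_cold = 0.5, 60.0, 10.0   # m^2, deg C, deg C

    def outflow(h):
        return 0.2 * np.sqrt(max(h, 0.0))     # simple gravity-driven outlet

    def linearizing_inputs(h, T, v1, v2):
        """Solve the 2x2 balances for (F_h, F_c) given desired dh/dt = v1 and dT/dt = v2."""
        rhs = np.array([A_tank * v1 + outflow(h), A_tank * h * v2])
        M = np.array([[1.0, 1.0],
                      [T_hot - T, T_cold - T]])
        F_h, F_c = np.linalg.solve(M, rhs)
        return max(F_h, 0.0), max(F_c, 0.0)   # physical flows cannot be negative

    # Usage for one control step; outer PI loops (not shown) would supply v1 and v2.
    print(linearizing_inputs(h=1.2, T=30.0, v1=0.05, v2=-0.5))
    ```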

  3. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.

  4. Modelling and regulating of cardio-respiratory response for the enhancement of interval training

    PubMed Central

    2014-01-01

    Background The interval training method has been a well-known exercise protocol which helps strengthen and improve one's cardiovascular fitness. Purpose To develop an effective training protocol to improve cardiovascular fitness based on modelling and analysis of Heart Rate (HR) and Oxygen Uptake (VO2) dynamics. Methods In order to model the cardiorespiratory response to the onset and offset exercises, a gas analyzer (K4b2, Cosmed) was used to monitor and record the heart rate and oxygen uptake for ten healthy male subjects. An interval training protocol was developed for young healthy users and was simulated using a proposed RC switching model which was presented to accommodate the variations of the cardiorespiratory dynamics to running exercises. A hybrid system model was presented to describe the adaptation process and a multi-loop PI control scheme was designed for the tuning of the interval training regime. Results By observing the original data for each subject, we can clearly identify that all subjects have similar HR and VO2 profiles. The proposed model is capable of simulating the exercise responses during onset and offset exercises; it ensures the continuity of the outputs within the interval training protocol. Under some mild assumptions, a hybrid system model can describe the adaptation process and accordingly a multi-loop PI controller can be designed for the tuning of the interval training protocol. The self-adaptation feature of the proposed controller gives the exerciser the opportunity to reach his desired setpoints after a certain number of training sessions. Conclusions The established interval training protocol targets a range of 70-80% of HRmax which is mainly a training zone for the purpose of cardiovascular system development and improvement. Furthermore, the proposed multi-loop feedback controller has the potential to tune the interval training protocol according to the feedback from an individual exerciser. PMID:24499131

  5. Introducing parallelism to histogramming functions for GEM systems

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Pozniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech

    2015-09-01

    This article is an assessment of the potential parallelization of histogramming algorithms in a GEM detector system. Histogramming and preprocessing algorithms in MATLAB were analyzed with regard to adding parallelism. A preliminary implementation of parallel strip histogramming resulted in a speedup. An analysis of the algorithms' parallelizability is presented. An overview of potential hardware and software support for implementing the parallel algorithm is discussed.

  6. Multi-loop positivity of the planar N = 4 SYM six-point amplitude

    DOE PAGES

    Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.; ...

    2017-02-22

    We study the six-point NMHV ratio function in planar N = 4 SYM theory in the context of positive geometry. The Amplituhedron construction of the integrand for the amplitudes provides a kinematical region in which the integrand was observed to be positive. It is natural to conjecture that this property survives integration, i.e. that the final result for the ratio function is also positive in this region. Establishing such a result would imply that preserving positivity is a surprising property of the Minkowski contour of integration and it might indicate some deeper underlying structure. We find that the ratio function is positive everywhere we have tested it, including analytic results for special kinematical regions at one and two loops, as well as robust numerical evidence through five loops. There is also evidence for not just positivity, but monotonicity in a “radial” direction. We also investigate positivity of the MHV six-gluon amplitude. While the remainder function ceases to be positive at four loops, the BDS-like normalized MHV amplitude appears to be positive through five loops.

  7. The application of the analog signal to discrete time interval converter to the signal conditioner power supplies

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

    The Analog Signal to Discrete Time Interval Converter microminiaturized module was utilized to control the signal conditioner power supplies. The multi-loop control provides outstanding static and dynamic performance characteristics, exceeding those generally associated with single-loop regulators. Eight converter boards, each containing three independent dc-to-dc converters, were built, tested, and delivered.

  8. Multiloop atom interferometer measurements of chameleon dark energy in microgravity

    NASA Astrophysics Data System (ADS)

    Chiow, Sheng-wey; Yu, Nan

    2018-02-01

    The chameleon field is one of the promising candidates for a dark energy scalar field. As in all viable candidate field theories, a screening mechanism is implemented to be consistent with all existing tests of general relativity. The screening effect in the chameleon theory limits its influence to a thin outer layer of a bulk object, thus producing extra forces orders of magnitude weaker than the gravitational force of the bulk. For pointlike particles such as atoms, the screening depth is larger than the size of the particle, so the screening mechanism is ineffective and the chameleon force is fully expressed on the atomic test particles. Extra-force measurements using atom interferometry are thus much more sensitive than bulk-mass-based measurements, and indeed have placed the most stringent constraints on the parameters characterizing the chameleon field. In this paper, we present a conceptual measurement approach for chameleon force detection using atom interferometry in microgravity, in which multiloop atom interferometers exploit specially designed periodic modulation of chameleon fields. We show that the major systematics of the dark energy force measurements, i.e., the effects of gravitational forces and their gradients, can be suppressed below all hypothetical chameleon signals in the parameter space of interest.

  9. Extreme skeletal open bite correction with vertical elastics.

    PubMed

    Cruz-Escalante, Marco Antonio; Aliaga-Del Castillo, Aron; Soldevilla, Luciano; Janson, Guilherme; Yatabe, Marilia; Zuazola, Ricardo Voss

    2017-11-01

    Severe skeletal open bites may be ideally treated with a combined surgical-orthodontic approach. Alternatively, compensations may be planned to camouflage the malocclusion with orthodontics alone. This case report describes the treatment of an 18-year-old man who presented with a severe open bite involving the anterior and posterior teeth up to the first molars, increased vertical dimension, bilateral Class III molar relationship, bilateral posterior crossbite, dental midline deviation, and absence of the maxillary right canine and the mandibular left first premolar. A treatment plan including the extraction of the mandibular right first premolar and based on uprighting and vertical control of the posterior teeth, combined with extrusion of the anterior teeth using multiloop edgewise archwire mechanics and elastics, was chosen. After 6 months of alignment and 2 months of multiloop edgewise archwire mechanics, the open bite was significantly reduced. After 24 months of treatment, anterior teeth extrusion, posterior teeth intrusion, and counterclockwise mandibular rotation were accomplished. Satisfactory improvement of the overbite, overjet, sagittal malocclusion, and facial appearance was achieved. The mechanics used in this clinical case demonstrated good and stable results for open-bite correction at the 2-year posttreatment follow-up.

  10. Malleability and optimization of tetrahedral metamorphic element for deployable truss antenna reflector

    NASA Astrophysics Data System (ADS)

    Hu, Fei; Song, Yanping; Huang, Zhirong; Liu, Wenlan; Li, Wan

    2018-05-01

    The tetrahedral elements that make up the large deployable reflector (LDR) are a kind of metamorphic element belonging to the class of multi-loop coupling mechanisms. Firstly, a method combining topology with screw theory is put forward; the parametric model and the constraint matrix are established to analyze the malleability of the 3RR-3RRR tetrahedral element. Secondly, the kinematics expression of each motion pair is deduced from the relationship between the velocity and the motion screw. Finally, the configuration of the metamorphic element is optimized so that the parabolic antenna can be fully folded and achieves the maximum folding ratio. The results show that the 3RR-3RRR element is a single-degree-of-freedom (DOF) mechanism. Moreover, three new configurations, 3RS-3RRR, 3SR-3RRR and 3UU-3RRR, are obtained on the basis of the optimization. In particular, it is shown that the LDR consisting of the 3RS-3RRR metamorphic element can achieve the maximum folding ratio. This paper provides a theoretical basis for the computer-aided design of truss antennas, which has excellent applicability in the field of aerospace and other multi-loop coupling mechanisms.

  11. A compilation and analysis of helicopter handling qualities data. Volume 2: Data analysis

    NASA Technical Reports Server (NTRS)

    Heffley, R. K.

    1979-01-01

    A compilation and an analysis of helicopter handling qualities data are presented. Multiloop manual control methods are used to analyze the descriptive data, stability derivatives, and transfer functions for a six degrees of freedom, quasi static model. A compensatory loop structure is applied to coupled longitudinal, lateral and directional equations in such a way that key handling qualities features are examined directly.

  12. Generalizations of polylogarithms for Feynman integrals

    NASA Astrophysics Data System (ADS)

    Bogner, Christian

    2016-10-01

    In this talk, we discuss recent progress in the application of generalizations of polylogarithms to the symbolic computation of multi-loop integrals. We briefly review the Maple program MPL, which supports a certain approach to the computation of Feynman integrals in terms of multiple polylogarithms. Furthermore, we discuss elliptic generalizations of polylogarithms which have been shown to be useful in the computation of the massive two-loop sunrise integral.

  13. Exploiting Non-sequence Data in Dynamic Model Learning

    DTIC Science & Technology

    2013-10-01

    For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver... embarrassingly parallelizable, whereas PM’s maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ... some estimation problems, this approach is able to give unique and consistent estimates while the maximum-likelihood method gets entangled in

  14. TH-A-9A-02: BEST IN PHYSICS (THERAPY) - 4D IMRT Planning Using Highly- Parallelizable Particle Swarm Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modiri, A; Gu, X; Sawant, A

    2014-06-15

    Purpose: We present a particle swarm optimization (PSO)-based 4D IMRT planning technique designed for dynamic MLC tracking delivery to lung tumors. The key idea is to utilize the temporal dimension as an additional degree of freedom rather than a constraint in order to achieve improved sparing of organs at risk (OARs). Methods: The target and normal structures were manually contoured on each of the ten phases of a 4DCT scan acquired from a lung SBRT patient who exhibited 1.5 cm tumor motion despite the use of abdominal compression. Corresponding ten IMRT plans were generated using the Eclipse treatment planning system. These plans served as initial guess solutions for the PSO algorithm. Fluence weights were optimized over the entire solution space, i.e., 10 phases × 12 beams × 166 control points. The size of the solution space motivated our choice of PSO, which is a highly parallelizable stochastic global optimization technique that is well-suited for such large problems. A summed fluence map was created using an in-house B-spline deformable image registration. Each plan was compared with a corresponding, internal target volume (ITV)-based IMRT plan. Results: The PSO 4D IMRT plan yielded comparable PTV coverage and significantly higher dose sparing for parallel and serial OARs compared to the ITV-based plan. The dose-sparing achieved via PSO-4DIMRT was: lung Dmean = 28%; lung V20 = 90%; spinal cord Dmax = 23%; esophagus Dmax = 31%; heart Dmax = 51%; heart Dmean = 64%. Conclusion: Truly 4D IMRT that uses the temporal dimension as an additional degree of freedom can achieve significant dose sparing of serial and parallel OARs. Given the large solution space, PSO represents an attractive, parallelizable tool to achieve globally optimal solutions for such problems. This work was supported through funding from the National Institutes of Health and Varian Medical Systems. Amit Sawant has research funding from Varian Medical Systems, VisionRT Ltd. and Elekta.
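
    The particle swarm update itself is simple to state. The sketch below is a generic global-best PSO on a toy objective with made-up hyperparameters, not the clinical 4D IMRT fluence optimization; it mainly shows why the method parallelizes so well, since each particle's objective evaluation is independent.

    ```python
    import numpy as np

    def pso(objective, n_dim, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, size=(n_particles, n_dim))   # positions
        v = np.zeros_like(x)                                     # velocities
        pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
        g = pbest[np.argmin(pbest_val)].copy()                   # global best

        for _ in range(iters):
            r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            # Objective evaluations are independent per particle -> trivially parallel.
            vals = np.apply_along_axis(objective, 1, x)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    # Usage: minimize a shifted sphere function in 10 dimensions.
    best_x, best_val = pso(lambda z: np.sum((z - 1.0) ** 2), n_dim=10)
    print(best_val)
    ```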

  15. Low-noise sub-harmonic injection locked multiloop ring oscillator

    NASA Astrophysics Data System (ADS)

    Weilin, Xu; Di, Wu; Xueming, Wei; Baolin, Wei; Jihai, Duan; Fadi, Gui

    2016-09-01

    A three-stage differential voltage-controlled ring oscillator is presented to meet the wide-tuning and low-phase-noise requirements of the clock and data recovery circuit in ultra-wideband (UWB) wireless body area networks. To improve the phase-noise performance of the delay cell with coarse and fine frequency tuning, injection-locked technology together with a pseudo-differential architecture is adopted. In addition, a multiloop structure is employed for frequency boosting. Two RVCOs, the standard RVCO without the IL block and the proposed IL RVCO, were fabricated in an SMIC 0.18 μm 1P6M salicide CMOS process. The proposed IL RVCO exhibits a measured phase noise of -112.37 dBc/Hz at 1 MHz offset from the center frequency of 1 GHz, while dissipating a current of 8 mA, excluding the buffer, from a 1.8-V supply voltage. It shows a 16.07 dB phase noise improvement at 1 MHz offset compared to the standard topology. Project supported by the National Natural Science Foundation of China (No. 61264001), the Guangxi Natural Science Foundation (Nos. 2013GXNSFAA019333, 2015GXNSFAA139301, 2014GXNSFAA118386), the Graduate Education Innovation Program of GUET (No. GDYCSZ201457), the Project of Guangxi Education Department (No. LD14066B) and the High-Level-Innovation Team and Outstanding Scholar Project of Guangxi Higher Education Institutes.

  16. A segmented multi-loop antenna for selective excitation of azimuthal mode number in a helicon plasma source.

    PubMed

    Shinohara, S; Tanikawa, T; Motomura, T

    2014-09-01

    A flat type, segmented multi-loop antenna was developed in the Tokai Helicon Device, built for producing high-density helicon plasma, with a diameter of 20 cm and an axial length of 100 cm. This antenna, composed of azimuthally splitting segments located on four different radial positions, i.e., r = 2.8, 4.8, 6.8, and 8.8 cm, can excite the azimuthal mode number m of 0, ±1, and ±2 by a proper choice of antenna feeder parts just on the rear side of the antenna. Power dependencies of the electron density n_e were investigated with a radio frequency (rf) power less than 3 kW (excitation frequency ranged from 8 to 20 MHz) by the use of various types of antenna segments, and n_e up to ~5 × 10^12 cm^-3 was obtained after the density jump from inductively coupled plasma to helicon discharges. Radial density profiles of m = 0 and ±1 modes with low and high rf powers were measured. For the cases of these modes after the density jump, the excited mode structures derived from the magnetic probe measurements were consistent with those expected from theory on helicon waves excited in the plasma.

  17. Successful Treatment of Postpeak Stage Patients with Class II Division 1 Malocclusion Using Non-extraction and Multiloop Edgewise Archwire Therapy: A Report on 16 Cases

    PubMed Central

    Liu, Jun; Zou, Ling; Zhao, Zhi-he; Welburn, Neala; Yang, Pu; Tang, Tian; Li, Yu

    2009-01-01

    Aim To determine cephalometrically the mechanism of the treatment effects of the non-extraction and multiloop edgewise archwire (MEAW) technique on postpeak Class II Division 1 patients. Methodology In this retrospective study, 16 postpeak Class II Division 1 patients successfully corrected using a non-extraction and MEAW technique were cephalometrically evaluated and compared with 16 matched control subjects treated using an extraction technique. Using CorelDRAW® software, standardized digital cephalograms taken pre- and post-active treatment were traced and a reference grid was set up. The superimpositions were based on the cranial base, the mandibular and maxillary regions, and skeletal and dental changes were measured. Changes following treatment were evaluated using the paired-sample t-test. Student's t-test for unpaired samples was used to assess the differences in changes between the MEAW and the extraction control groups. Results The correction of the molar relationships comprised 54% skeletal change (mainly the advancement of the mandible) and 46% dental change. Correction of the anterior teeth relationships comprised 30% skeletal change and 70% dental change. Conclusion The MEAW technique can produce the desired vertical and sagittal movement of the tooth segment and then effectively stimulate mandibular advancement by utilizing the residual growth potential of the condyle. PMID:20690424

  18. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    PubMed

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which allows it to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented using an NVIDIA graphics processing unit to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enable the detection of objects with and without edges, and the 2D histogram information ensures the effectiveness of the method in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.

  19. Development and Characterization of a Parallelizable Perfusion Bioreactor for 3D Cell Culture.

    PubMed

    Egger, Dominik; Fischer, Monica; Clementi, Andreas; Ribitsch, Volker; Hansmann, Jan; Kasper, Cornelia

    2017-05-25

    The three dimensional (3D) cultivation of stem cells in dynamic bioreactor systems is essential in the context of regenerative medicine. Still, there is a lack of bioreactor systems that allow the cultivation of multiple independent samples under different conditions while ensuring comprehensive control over the mechanical environment. Therefore, we developed a miniaturized, parallelizable perfusion bioreactor system with two different bioreactor chambers. Pressure sensors were also implemented to determine the permeability of biomaterials which allows us to approximate the shear stress conditions. To characterize the flow velocity and shear stress profile of a porous scaffold in both bioreactor chambers, a computational fluid dynamics analysis was performed. Furthermore, the mixing behavior was characterized by acquisition of the residence time distributions. Finally, the effects of the different flow and shear stress profiles of the bioreactor chambers on osteogenic differentiation of human mesenchymal stem cells were evaluated in a proof of concept study. In conclusion, the data from computational fluid dynamics and shear stress calculations were found to be predictable for relative comparison of the bioreactor geometries, but not for final determination of the optimal flow rate. However, we suggest that the system is beneficial for parallel dynamic cultivation of multiple samples for 3D cell culture processes.

  20. Development and Characterization of a Parallelizable Perfusion Bioreactor for 3D Cell Culture

    PubMed Central

    Egger, Dominik; Fischer, Monica; Clementi, Andreas; Ribitsch, Volker; Hansmann, Jan; Kasper, Cornelia

    2017-01-01

    The three dimensional (3D) cultivation of stem cells in dynamic bioreactor systems is essential in the context of regenerative medicine. Still, there is a lack of bioreactor systems that allow the cultivation of multiple independent samples under different conditions while ensuring comprehensive control over the mechanical environment. Therefore, we developed a miniaturized, parallelizable perfusion bioreactor system with two different bioreactor chambers. Pressure sensors were also implemented to determine the permeability of biomaterials which allows us to approximate the shear stress conditions. To characterize the flow velocity and shear stress profile of a porous scaffold in both bioreactor chambers, a computational fluid dynamics analysis was performed. Furthermore, the mixing behavior was characterized by acquisition of the residence time distributions. Finally, the effects of the different flow and shear stress profiles of the bioreactor chambers on osteogenic differentiation of human mesenchymal stem cells were evaluated in a proof of concept study. In conclusion, the data from computational fluid dynamics and shear stress calculations were found to be predictable for relative comparison of the bioreactor geometries, but not for final determination of the optimal flow rate. However, we suggest that the system is beneficial for parallel dynamic cultivation of multiple samples for 3D cell culture processes. PMID:28952530

  1. Laser dynamics: The system dynamics and network theory of optoelectronic integrated circuit design

    NASA Astrophysics Data System (ADS)

    Tarng, Tom Shinming-T. K.

    Laser dynamics is the system dynamics, communication and network theory for the design of opto-electronic integrated circuit (OEIC). Combining the optical network theory and optical communication theory, the system analysis and design for the OEIC fundamental building blocks is considered. These building blocks include the direct current modulation, inject light modulation, wideband filter, super-gain optical amplifier, E/O and O/O optical bistability and current-controlled optical oscillator. Based on the rate equations, the phase diagram and phase portrait analysis is applied to the theoretical studies and numerical simulation. The OEIC system design methodologies are developed for the OEIC design. Stimulating-field-dependent rate equations are used to model the line-width narrowing/broadening mechanism for the CW mode and frequency chirp of semiconductor lasers. The momentary spectra are carrier-density-dependent. Furthermore, the phase portrait analysis and the nonlinear refractive index is used to simulate the single mode frequency chirp. The average spectra of chaos, period doubling, period pulsing, multi-loops and analog modulation are generated and analyzed. The bifurcation-chirp design chart with modulation depth and modulation frequency as parameters is provided for design purpose.

  2. Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano

    2013-04-01

    We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
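
    The algebraic kernel of the method, division of a numerator polynomial modulo a Gröbner basis of the cut conditions, can be reproduced in any computer algebra system. The toy sketch below uses sympy on arbitrary polynomials in two variables (nothing to do with the actual loop-momentum variables of the paper) just to show the quotient/remainder structure the integrand decomposition relies on.

    ```python
    from sympy import symbols, groebner, reduced, expand

    x, y = symbols('x y')

    # Toy "cut conditions" generating an ideal, and a toy numerator to reduce.
    cuts = [x**2 + y**2 - 1, x*y - 2]
    numerator = x**3*y + x*y**3 + x + y

    G = groebner(cuts, x, y, order='lex')
    quotients, remainder = reduced(numerator, list(G), x, y, order='lex')

    # Check the defining identity: numerator = sum_i q_i * g_i + remainder.
    reconstructed = sum(q * g for q, g in zip(quotients, G)) + remainder
    print("remainder:", remainder)
    print("check (should be 0):", expand(reconstructed - numerator))
    ```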

  3. Opto-electronic oscillator and its applications

    NASA Astrophysics Data System (ADS)

    Yao, X. S.; Maleki, Lute

    1997-04-01

    We review the properties of a new class of microwave oscillators called opto-electronic oscillators (OEO). We present theoretical and experimental results of a multi-loop technique for single mode selection. We then describe a new development called coupled OEO (COEO) in which the electrical oscillation is directly coupled with the optical oscillation, producing an OEO that generates stable optical pulses and single mode microwave oscillation simultaneously. Finally we discuss various applications of OEO.

  4. Evaluating Multi-Input/Multi-Output Digital Control Systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Wieseman, Carol D.; Hoadley, Sherwood T.; Mukhopadhyay, Vivek

    1994-01-01

    Controller-performance-evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems developed. Procedures identify potentially destabilizing controllers and confirm satisfactory performance of stabilizing ones. Methodology generic and used in many types of multi-loop digital-controller applications, including digital flight-control systems, digitally controlled spacecraft structures, and actively controlled wind-tunnel models. Also applicable to other complex, highly dynamic digital controllers, such as those in high-performance robot systems.

  5. Strings on plane-waves and spin chains on orbifolds

    NASA Astrophysics Data System (ADS)

    Sadri, Darius

    This thesis covers a number of topics in string theory focusing on various aspects of the AdS/CFT duality in various guises and regimes. In the first chapter we present a self-contained review of the Plane-wave/super-Yang-Mills duality. This duality is a specification of the usual AdS/CFT correspondence in the "Penrose limit". In chapter two we study the most general parallelizable pp-wave backgrounds which are non-dilatonic solutions in the NS-NS sector of type IIA and IIB string theories. We demonstrate that parallelizable pp-wave backgrounds are necessarily homogeneous plane-waves, and that a large class of homogeneous plane-waves are parallelizable, stating the necessary conditions. Quantization of string modes, their compactification and behaviour under T-duality are also studied, as are BPS Dp-branes on such backgrounds. In chapter three we consider giant gravitons on the maximally supersymmetric plane-wave background. We deduce the low energy effective light-cone Hamiltonian of the three-sphere giant graviton, and place sources in this effective gauge theory. Although non-vanishing net electric charge configurations are disallowed by Gauss' law, electric dipoles can be formed. From the string theory point of view these dipoles can be understood as open strings piercing the three-sphere, giving a two dimensional (worldsheet) description of giant gravitons. Chapter four presents some new ideas regarding the relation between super-conformal gauge theories and string theories with three-dimensional target spaces, possible relations of these systems to Hamiltonian lattice gauge theories, and integrable spin chains. We consider N = 1, D = 4 superconformal SU(N)^(p×q) Yang-Mills theories dual to AdS5 x S5/Zp x Zq orbifolds. We show that a specific sector of this dilatation operator can be thought of as the transfer matrix for a three-dimensional statistical mechanical system, which in turn is equivalent to a 2 + 1-dimensional string theory where the spatial slices are discretized on a triangular lattice, and comment on the integrability of this N = 1 gauge theory, its connection to three-dimensional lattice gauge theories, extensions to six-dimensional string theories, AdS/CFT type dualities and finally their construction via orbifolds and brane-box models. In the process we discover a new class of almost-BPS BMN type operators with large engineering dimensions but controllably small anomalous corrections.

  6. Return Difference Feedback Design for Robust Uncertainty Tolerance in Stochastic Multivariable Control Systems.

    DTIC Science & Technology

    1984-07-01

    "robustness" analysis for multiloop feedback systems. Reference [55] describes a simple method based on the Perron-Frobenius theory of non-negative... Viewpoint," Operator Theory: Advances and Applications, 12, pp. 277-302, 1984. - E. A. Jonckheere, "New Bound on the Sensitivity -- of the Solution of... Reidel, Dordrecht, Holland, 1984. M. G. Safonov, "Comments on Singular Value Theory in Uncertain Feedback Systems," to appear IEEE Trans. on Automatic

  7. A segmented multi-loop antenna for selective excitation of azimuthal mode number in a helicon plasma source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shinohara, S., E-mail: sshinoha@cc.tuat.ac.jp; Tanikawa, T.; Motomura, T.

    2014-09-15

    A flat type, segmented multi-loop antenna was developed in the Tokai Helicon Device, built for producing high-density helicon plasma, with a diameter of 20 cm and an axial length of 100 cm. This antenna, composed of azimuthally splitting segments located on four different radial positions, i.e., r = 2.8, 4.8, 6.8, and 8.8 cm, can excite the azimuthal mode number m of 0, ±1, and ±2 by a proper choice of antenna feeder parts just on the rear side of the antenna. Power dependencies of the electron density n_e were investigated with a radio frequency (rf) power less than 3 kW (excitation frequency ranged from 8 to 20 MHz) by the use of various types of antenna segments, and n_e up to ~5 × 10^12 cm^-3 was obtained after the density jump from inductively coupled plasma to helicon discharges. Radial density profiles of m = 0 and ±1 modes with low and high rf powers were measured. For the cases of these modes after the density jump, the excited mode structures derived from the magnetic probe measurements were consistent with those expected from theory on helicon waves excited in the plasma.

  8. Simultaneous gains tuning in boiler/turbine PID-based controller clusters using iterative feedback tuning methodology.

    PubMed

    Zhang, Shu; Taft, Cyrus W; Bentsman, Joseph; Hussey, Aaron; Petrus, Bryan

    2012-09-01

    Tuning a complex multi-loop PID based control system requires considerable experience. In today's power industry the number of available qualified tuners is dwindling and there is a great need for better tuning tools to maintain and improve the performance of complex multivariable processes. Multi-loop PID tuning is the procedure for the online tuning of a cluster of PID controllers operating in a closed loop with a multivariable process. This paper presents the first application of the simultaneous tuning technique to the multi-input-multi-output (MIMO) PID based nonlinear controller in the power plant control context, with the closed-loop system consisting of a MIMO nonlinear boiler/turbine model and a nonlinear cluster of six PID-type controllers. Although simplified, the dynamics and cross-coupling of the process and the PID cluster are similar to those used in a real power plant. The particular technique selected, iterative feedback tuning (IFT), utilizes the linearized version of the PID cluster for signal conditioning, but the data collection and tuning is carried out on the full nonlinear closed-loop system. Based on the figure of merit for the control system performance, the IFT is shown to deliver performance favorably comparable to that attained through the empirical tuning carried out by an experienced control engineer. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.

    PubMed

    Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo

    2015-12-01

    In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted to the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate human's neuromuscular and visual responses in cases where the classic method fails.
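
    The first of the proposed methods rests on fitting autoregressive models with exogenous inputs (ARX) to the measured signals. As a generic illustration, not the pilot-model structure of the paper, the sketch below estimates ARX coefficients by ordinary least squares from simulated input-output data with assumed model orders.

    ```python
    import numpy as np

    def fit_arx(u, y, na, nb):
        """Least-squares fit of y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] + e[k]."""
        n0 = max(na, nb)
        rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
                for k in range(n0, len(y))]
        Phi = np.array(rows)
        theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
        return theta[:na], theta[na:]          # (a coefficients, b coefficients)

    # Simulate a known 2nd-order ARX system and recover its coefficients.
    rng = np.random.default_rng(0)
    a_true, b_true = np.array([1.5, -0.7]), np.array([0.5, 0.25])
    u = rng.standard_normal(2000)
    y = np.zeros(2000)
    for k in range(2, 2000):
        y[k] = a_true @ y[k - 2:k][::-1] + b_true @ u[k - 2:k][::-1] + 0.01 * rng.standard_normal()

    a_est, b_est = fit_arx(u, y, na=2, nb=2)
    print(a_est, b_est)    # should be close to a_true and b_true
    ```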

  10. Diversity in computing technologies and strategies for dynamic resource allocation

    DOE PAGES

    Garzoglio, G.; Gutsche, O.

    2015-12-23

    Here, High Energy Physics (HEP) is a very data intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer details requiring ever increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.

  11. Computational Fluid Dynamics for Atmospheric Entry

    DTIC Science & Technology

    2009-09-01

    equations. This method is a parallelizable variant of the Gauss-Seidel line-relaxation method of MacCormack (Ref. 33, 35), and is at the core of the... G.V. Candler, "The Solution of the Navier-Stokes Equations Gauss-Seidel Line Relaxation," Computers and Fluids, Vol. 17, No. 1, 1989, pp. 135-150. 35... solution differs by 5% from the results obtained using the direct simulation Monte Carlo method. Some authors advocate the use of higher-order continuum

  12. Hedgehog bases for A n cluster polylogarithms and an application to six-point amplitudes

    DOE PAGES

    Parker, Daniel E.; Scherlis, Adam; Spradlin, Marcus; ...

    2015-11-20

    Multi-loop scattering amplitudes in N=4 Yang-Mills theory possess cluster algebra structure. In order to develop a computational framework which exploits this connection, we show how to construct bases of Goncharov polylogarithm functions, at any weight, whose symbol alphabet consists of cluster coordinates on the A n cluster algebra. As a result, using such a basis we present a new expression for the 2-loop 6-particle NMHV amplitude which makes some of its cluster structure manifest.

  13. A Maple package for computing Gröbner bases for linear recurrence relations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Robertz, Daniel

    2006-04-01

    A Maple package for computing Gröbner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type.

  14. Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Kwang, Abel

    1994-01-01

    This paper develops a parallelizable multilevel multiple constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential, as well as partially and fully parallel environments can be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort due both to updating and inversion.

  15. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  16. Electrical Wave Propagation in a Minimally Realistic Fiber Architecture Model of the Left Ventricle

    NASA Astrophysics Data System (ADS)

    Song, Xianfeng; Setayeshgar, Sima

    2006-03-01

    Experimental results indicate a nested, layered geometry for the fiber surfaces of the left ventricle, where fiber directions are approximately aligned in each surface and gradually rotate through the thickness of the ventricle. Numerical and analytical results have highlighted the importance of this rotating anisotropy and its possible destabilizing role on the dynamics of scroll waves in excitable media with application to the heart. Based on the work of Peskin[1] and Peskin and McQueen[2], we present a minimally realistic model of the left ventricle that adequately captures the geometry and anisotropic properties of the heart as a conducting medium while being easily parallelizable, and computationally more tractable than fully realistic anatomical models. Complementary to fully realistic and anatomically-based computational approaches, studies using such a minimal model with the addition of successively realistic features, such as excitation-contraction coupling, should provide unique insight into the basic mechanisms of formation and obliteration of electrical wave instabilities. We describe our construction, implementation and validation of this model. [1] C. S. Peskin, Communications on Pure and Applied Mathematics 42, 79 (1989). [2] C. S. Peskin and D. M. McQueen, in Case Studies in Mathematical Modeling: Ecology, Physiology, and Cell Biology, 309(1996)

  17. 3D early embryogenesis image filtering by nonlinear partial differential equations.

    PubMed

    Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O

    2010-08-01

    We present nonlinear diffusion equations, numerical schemes to solve them and their application for filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with the goal of identifying the optimal filtering method and its parameters. In large-scale applications dealing with analysis of 3D+time embryogenesis images, an important objective is the correct detection of the number and position of cell nuclei yielding the spatio-temporal cell lineage tree of embryogenesis. The filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. finite volume method in space and semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast and naturally parallelizable. The filtering results are evaluated and compared first using the Mean Hausdorff distance between a gold standard and different isosurfaces of original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original and in the filtered data is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of the edge-preserving nonlinear diffusion filtering for this type of data and leads to finding the optimal filtering parameters for the studied models and numerical schemes. Further comparisons concern the ability to split very close objects that are artificially connected due to acquisition errors intrinsically linked to the physics of LSM. In all studied aspects, the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) turned out to have the best performance. Copyright 2010 Elsevier B.V. All rights reserved.
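
    For orientation only, a bare-bones explicit Perona-Malik update can be written in a few lines of NumPy. This is not the semi-implicit finite-volume scheme studied in the paper (which is unconditionally stable); the explicit form below must respect a time-step restriction, and the diffusivity and conductance parameter are the usual textbook choices, applied here to synthetic data.

```python
import numpy as np

def perona_malik_step(u, kappa=0.1, dt=0.2):
    """One explicit Perona-Malik update on a 2D image (periodic boundaries via np.roll)."""
    # One-sided differences to the four neighbours
    dn = np.roll(u, -1, axis=0) - u
    ds = np.roll(u,  1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u,  1, axis=1) - u
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # edge-stopping diffusivity
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

noisy = np.random.default_rng(0).normal(0.5, 0.1, (64, 64))
filtered = noisy
for _ in range(20):
    filtered = perona_malik_step(filtered)
```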

  18. Group theoretic approach to the perturbative string S-matrix

    NASA Astrophysics Data System (ADS)

    Neveu, A.; West, P.

    1987-07-01

    A new approach to the computation of string scattering is given. From duality, unitarity and a generic overlap property, we determine entirely the N-string amplitude, including the integration measure, and its gauge properties. The techniques do not use any oscillator algebra, but the computation is reduced to a straightforward exercise in conformal group theory. This can be applied to fermionic trees and multiloop diagrams, but in this paper it is demonstrated on the open bosonic tree.

  19. Algorithmic transformation of multi-loop master integrals to a canonical basis with CANONICA

    NASA Astrophysics Data System (ADS)

    Meyer, Christoph

    2018-01-01

    The integration of differential equations of Feynman integrals can be greatly facilitated by using a canonical basis. This paper presents the Mathematica package CANONICA, which implements a recently developed algorithm to automatize the transformation to a canonical basis. This represents the first publicly available implementation suitable for differential equations depending on multiple scales. In addition to the presentation of the package, this paper extends the description of some aspects of the algorithm, including a proof of the uniqueness of canonical forms up to constant transformations.

  20. Gluons for (almost) nothing, gravitons for free

    NASA Astrophysics Data System (ADS)

    Carrasco, John Joseph M.

    2013-07-01

    In this talk I describe a new method for organizing Yang-Mills scattering amplitudes that allows the definition of an entire multi-loop scattering amplitude in terms of a small number of "master" graphs. A small amount of information is required from the theory, and constraints propagate this information to the full amplitude. When organized in such a way, corresponding gravitational amplitudes are trivially found. This talk is based on work [1-4] done in collaboration with Zvi Bern, Lance Dixon, Henrik Johansson, and Radu Roiban, and follows closely the presentation given in ref. [5].

  1. Analyses of shuttle orbiter approach and landing conditions

    NASA Technical Reports Server (NTRS)

    Teper, G. L.; Dimarco, R. J.; Ashkenas, I. L.; Hoh, R. H.

    1981-01-01

    A study of Shuttle Orbiter approach and landing conditions is summarized. Causes of observed PIO-like flight deficiencies are identified and potential cures are examined. Closed-loop pilot/vehicle analyses are described and path/attitude stability boundaries are defined. The latter novel technique proved of great value in delineating and illustrating the basic causes of this multiloop pilot control problem. The analytical results are shown to be consistent with flight test and fixed-base simulation. Conclusions are drawn relating to possible improvements of the Shuttle Orbiter/digital flight control system.

  2. Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.

    PubMed

    van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim

    2018-05-21

    Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.

  3. Optoelectronic oscillator with improved phase noise and frequency stability

    NASA Astrophysics Data System (ADS)

    Eliyahu, Danny; Sariri, Kouros; Taylor, Joseph; Maleki, Lute

    2003-07-01

    In this paper we report on recent improvements in the phase noise and frequency stability of a 10 GHz opto-electronic oscillator. In our OEO loop, the high-Q elements (the optical fiber and the narrow bandpass microwave filter) are thermally stabilized using resistive heaters and temperature controllers, keeping their temperature above ambient. The thermally stabilized free-running OEO demonstrates a short-term frequency stability of 0.02 ppm (over several hours) and a frequency vs. temperature slope of -0.1 ppm/°C (compared to -8.3 ppm/°C for a non-thermally-stabilized OEO). We obtained an exceptional spectral purity, with a phase noise level of -143 dBc/Hz at 10 kHz offset frequency. We also describe the multi-loop configuration that dramatically reduces the spurious level at offset frequencies related to the loop round-trip harmonic frequency. The multi-loop configuration has stronger mode selectivity due to interference between signals having different cavity lengths. A drop of the spurious level below -90 dBc was demonstrated. The effect of oscillator aging on the frequency stability was studied as well by recording the oscillator frequency (in a chamber) over several weeks. We observed a reversal in aging direction with logarithmic behavior of A ln(Bt+1) - C ln(Dt+1), where t is the time and A, B, C, D are constants. Initially, in the first several days, the positive aging dominates; later, the negative aging mechanism dominates. We have concluded that this long-term aging model is consistent with the experimental results.
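
    The reported aging law A ln(Bt+1) - C ln(Dt+1) lends itself to a straightforward least-squares fit. The sketch below is purely illustrative: it generates a synthetic drift trace with made-up constants (chosen so that positive aging dominates early and negative aging dominates later) and recovers them with scipy.optimize.curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def aging_model(t, A, B, C, D):
    """Frequency drift model: positive aging first, negative aging later."""
    return A * np.log(B * t + 1.0) - C * np.log(D * t + 1.0)

# Synthetic drift trace in ppm over 40 days (hypothetical constants).
t_days = np.linspace(0.0, 40.0, 200)
true = aging_model(t_days, 0.15, 3.0, 0.35, 0.15)
rng = np.random.default_rng(1)
measured = true + rng.normal(0.0, 0.005, t_days.size)

popt, pcov = curve_fit(aging_model, t_days, measured, p0=[0.1, 1.0, 0.1, 0.1])
print("fitted A, B, C, D:", popt)
```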

  4. Meromorphic solutions of recurrence relations and DRA method for multicomponent master integrals

    NASA Astrophysics Data System (ADS)

    Lee, Roman N.; Mingulov, Kirill T.

    2018-04-01

    We formulate a method to find the meromorphic solutions of higher-order recurrence relations in the form of the sum over poles with coefficients defined recursively. Several explicit examples of the application of this technique are given. The main advantage of the described approach is that the analytical properties of the solutions are very clear (the position of poles is explicit, the behavior at infinity can be easily determined). These are exactly the properties that are required for the application of the multiloop calculation method based on dimensional recurrence relations and analyticity (the DRA method).

  5. Analyses of Shuttle Orbiter approach and landing

    NASA Technical Reports Server (NTRS)

    Ashkenas, I. L.; Hoh, R. H.; Teper, G. L.

    1982-01-01

    A study of the Shuttle Orbiter approach and landing conditions is summarized. The causes of observed PIO-like flight deficiencies are listed, and possible corrective measures are examined. Closed-loop pilot/vehicle analyses are described, and a description is given of path-attitude stability boundaries. The latter novel approach is found to be of great value in delineating and illustrating the basic causes of this multiloop pilot control problem. It is shown that the analytical results are consistent with flight test and fixed-base simulation. Conclusions are drawn concerning possible improvements in the Shuttle Orbiter/Digital Flight Control System.

  6. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
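
    The first method's key idea, approximating the action of the matrix exponential by a polynomial so that no linear systems need to be solved, can be conveyed with a plain truncated Taylor series. This is a simplification of the approximations discussed in the paper (which also considers Pade and Chebyshev forms), applied to a small hypothetical diffusion operator.

```python
import numpy as np
from scipy.linalg import expm

def expm_poly_action(A, v, t, m=30):
    """Approximate exp(t*A) @ v by an m-term Taylor polynomial in A.

    Only matrix-vector products are needed, which is what makes the
    approach attractive for parallel and matrix-free settings.
    """
    result = v.copy()
    term = v.copy()
    for k in range(1, m + 1):
        term = (t / k) * (A @ term)
        result = result + term
    return result

# Small symmetric test problem (a 1D diffusion operator, hypothetical sizes).
n = 50
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.random.default_rng(0).standard_normal(n)
u = expm_poly_action(A, v, t=0.1)

print(np.linalg.norm(u - expm(0.1 * A) @ v))   # should be tiny
```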

  7. Moduli of quantum Riemannian geometries on <=4 points

    NASA Astrophysics Data System (ADS)

    Majid, S.; Raineri, E.

    2004-12-01

    We classify parallelizable noncommutative manifold structures on finite sets of small size in the general formalism of framed quantum manifolds and vielbeins introduced previously [S. Majid, Commun. Math. Phys. 225, 131 (2002)]. The full moduli space is found for ⩽3 points, and a restricted moduli space for 4 points. Generalized Levi-Cività connections and their curvatures are found for a variety of models including models of a discrete torus. The topological part of the moduli space is found for ⩽9 points based on the known atlas of regular graphs. We also remark on aspects of quantum gravity in this approach.

  8. Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett A.; Martin, Bryan J.

    2004-01-01

    Increasing demands on the fidelity of real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous state systems is presented which applies to these systems, and is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
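
    A toy version of the multi-rate idea (not the authors' algorithm) advances the slow state with a large step while sub-cycling the fast state with smaller steps inside each slow step. The sketch below does this with forward Euler on a hypothetical two-time-scale linear system; the step sizes and dynamics are made up for illustration.

```python
import numpy as np

def multirate_euler(x_slow, x_fast, t_end, H=0.01, substeps=20):
    """Forward-Euler multi-rate integration of a two-time-scale system.

    Slow dynamics:  x_slow' = -x_slow + x_fast
    Fast dynamics:  x_fast' = -100*x_fast + x_slow   (hypothetical model)
    """
    h = H / substeps
    t = 0.0
    while t < t_end:
        slow_rate = -x_slow + x_fast            # held constant over the macro step
        for _ in range(substeps):               # sub-cycle the fast state
            x_fast += h * (-100.0 * x_fast + x_slow)
        x_slow += H * slow_rate
        t += H
    return x_slow, x_fast

print(multirate_euler(1.0, 1.0, t_end=1.0))
```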

  9. Multiloop integral system test (MIST): Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gloudemans, J.R.

    1991-04-01

    The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once-Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST program is reported in 11 volumes. Volumes 2 through 8 pertain to groups of Phase 3 tests by type; Volume 9 presents inter-group comparisons; Volume 10 provides comparisons between the RELAP5/MOD2 calculations and MIST observations; and Volume 11 (with addendum) presents the later Phase 4 tests. This is Volume 1 of the MIST final report, a summary of the entire MIST program. Major topics include Test Advisory Group (TAG) issues, facility scaling and design, the test matrix, observations, comparison of RELAP5 calculations to MIST observations, and MIST versus the TAG issues. MIST generated consistent integral-system data covering a wide range of transient interactions. MIST provided insight into integral system behavior and assisted the code effort. The MIST observations addressed each of the TAG issues. 11 refs., 29 figs., 9 tabs.

  10. Multiloop Rapid-Rise/Rapid Fall High-Voltage Power Supply

    NASA Technical Reports Server (NTRS)

    Bearden, Douglas

    2007-01-01

    A proposed multiloop power supply would generate a potential as high as 1.25 kV with rise and fall times <100 μs. This power supply would, moreover, be programmable to generate output potentials from 20 to 1,250 V and would be capable of supplying a current of at least 300 μA at 1,250 V. This power supply is intended to be a means of electronic shuttering of a microchannel plate that would be used to intensify the output of a charge-coupled-device imager to obtain exposure times as short as 1 ms. The basic design of this power supply could also be adapted to other applications in which high voltages and high slew rates are needed. At the time of reporting the information for this article, there was no commercially available power supply capable of satisfying the stated combination of voltage, rise-time, and fall-time requirements. The power supply would include a preregulator that would be used to program a voltage 1/30 of the desired output voltage. By means of a circuit that would include a pulse-width modulator (PWM), two voltage doublers, and a transformer having two primary and two secondary windings, the preregulator output voltage would be amplified by a factor of 30. A resistor would limit the current by controlling a drive voltage applied to field-effect transistors (FETs) during turn-on of the PWM. Two feedback loops would be used to regulate the high output voltage. A pulse transformer would be used to turn on four FETs to short-circuit output capacitors when the outputs of the PWM were disabled. Application of a 0-to-5-V square wave to a PWM shut-down pin would cause a 20-to-1,250-V square wave to appear at the output.

  11. Camouflage treatment of skeletal Class III malocclusion with multiloop edgewise arch wire and modified Class III elastics by maxillary mini-implant anchorage.

    PubMed

    He, Shushu; Gao, Jinhui; Wamalwa, Peter; Wang, Yunji; Zou, Shujuan; Chen, Song

    2013-07-01

    To evaluate the effect of the multiloop edgewise arch wire (MEAW) technique with maxillary mini-implants in the camouflage treatment of skeletal Class III malocclusion. Twenty patients were treated with the MEAW technique and modified Class III elastics from the maxillary mini-implants. Twenty-four patients were treated with MEAW and long Class III elastics from the upper second molars as control. Lateral cephalometric radiographs were obtained and analyzed before and after treatment, and 1 year after retention. Satisfactory occlusion was established in both groups. Through principal component analysis, it could be concluded that the anterior-posterior dental position, skeletal sagittal and vertical position, and upper molar vertical position changed within groups and between groups; vertical lower teeth position and Wits distance changed in the experimental group and between groups. In the experimental group, the lower incisors tipped lingually 2.7 mm and extruded 2.4 mm. The lingual inclination of the lower incisors increased 3.5°. The mandibular first molars tipped distally 9.1° and intruded 0.4 mm. Their cusps moved 3.4 mm distally. In the control group, the upper incisors proclined 3°, and the upper first molar extruded 2 mm. SN-MP increased 1.6° and S-Go/N-ME decreased 1. The MEAW technique combined with modified Class III elastics by maxillary mini-implants can effectively tip the mandibular molars distally without any extrusion and tip the lower incisors lingually with extrusion to camouflage skeletal Class III malocclusions. Clockwise rotation of the mandible and further proclination of upper incisors can be avoided. The MEAW technique and modified Class III elastics provided an appropriate treatment strategy, especially for patients with a high angle and an open-bite tendency.

  12. Nonsurgical correction of a Class III malocclusion in an adult by miniscrew-assisted mandibular dentition distalization.

    PubMed

    Jing, Yan; Han, Xianglong; Guo, Yongwen; Li, Jingyu; Bai, Ding

    2013-06-01

    This article reports the successful use of miniscrews in the mandible to treat a 20-year-old Mongolian woman with a chief complaint of anterior crossbite. The patient had a skeletal Class III malocclusion with a mildly protrusive mandible, an anterior crossbite, and a deviated midline. In light of the advantages for reconstruction of the occlusal plane and distal en-masse movement of the mandibular arch, we used a multiloop edgewise archwire in the initial stage. However, the maxillary incisors were in excessive labioversion accompanied by little retraction of the mandibular incisors; these results were obviously not satisfying after 4 months of multiloop edgewise archwire treatment. Two miniscrews were subsequently implanted vertically in the external oblique ridge areas of the bilateral mandibular ramus as skeletal anchorage for en-masse distalization of the mandibular dentition. During treatment, the mandibular anterior teeth were retracted about 4.0 mm without negative lingual inclinations. The movement of the mandibular first molar was almost bodily translation. The maxillary incisors maintained good inclinations by rotating their brackets 180° along with the outstanding performance of the beta-titanium wire. The patient received a harmonious facial balance, an attractive smile, and ideal occlusal relationships. The outcome was stable after 1 year of retention. Our results suggest that the application of miniscrews in the posterior area of the mandible is an effective approach for Class III camouflage treatment. This technique requires minimal compliance and is particularly useful for correcting Class III patients with mild mandibular protrusion and minor crowding. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  13. Model and Data Reduction for Control, Identification and Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Boris

    This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition, by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples, a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced-order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed-sensing-based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise, and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes as well as a Boussinesq flow application.
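
    As a point of reference for the projection-based ideas summarized above, a proper orthogonal decomposition basis is simply the set of leading left singular vectors of a snapshot matrix. The sketch below (synthetic snapshot data, not from the dissertation) computes such a basis and checks how much of the snapshot energy it retains.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Return the r-dimensional POD basis of a snapshot matrix (states in columns)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    print(f"retained energy with r={r}: {energy[r-1]:.6f}")
    return U[:, :r]

# Synthetic snapshots of a travelling pulse (hypothetical data).
x = np.linspace(0.0, 1.0, 400)
times = np.linspace(0.0, 1.0, 100)
X = np.array([np.exp(-200.0 * (x - 0.2 - 0.5 * t) ** 2) for t in times]).T

Phi = pod_basis(X, r=10)
X_approx = Phi @ (Phi.T @ X)                    # projection onto the POD subspace
print("relative error:", np.linalg.norm(X - X_approx) / np.linalg.norm(X))
```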

  14. Direct Simulation and Theoretical Study of Sub- and Supersonic Wakes

    NASA Astrophysics Data System (ADS)

    Hickey, Jean-Pierre

    Wakes are constitutive components of engineering, aeronautical and geophysical flows. Despite their canonical nature, many fundamental questions surrounding wakes remain unanswered. The present work studies the nature of archetypal planar splitter-plate wakes in the sub- and supersonic regimes from a theoretical as well as a numerical perspective. A highly-parallelizable computational fluid dynamic solver was developed, from scratch, for the very-large scale direct numerical simulations of high-speed free shear flows. Wakes maintain a near indelible memory of their origins; thus, changes to the state of the flow on the generating body lead to multiple self-similar states in the far wake. To understand the source of the lack of universality, three distinct wake evolution scenarios are investigated in the incompressible limit: the Kelvin-Helmholtz transition, the bypass transition in an asymmetric wake and the initially turbulent wake. The multiplicity of self-similar states is the result of a plurality of far wake structural organizations, which maintains the memory of the flow. The structural organization is predicated on the presence or absence of near wake anti-symmetric perturbations (as a result of shedding, instability modes and/or trailing edge receptivity). The plurality of large-scale structural organization contrasts with the commonality observed in the mid-sized structures, which are dominated by inclined vortical rods, and not, as previously assumed, by horseshoe structures. The compressibility effects are a direct function of the maximal velocity defect in the wake and are therefore only important in the transitional region - the far wake having an essentially incompressible character. The compressibility simultaneously modifies the growth rate and wavelength of the primary instability mode with a concomitant effect on the emerging transitional structures. As a direct result, the spanwise rollers have an increasing ellipticity and cross-wake domain of influence with the increasing Mach number of the wake. Consequently, structural pairing - a key feature of wake transition - is inhibited at a critical Mach number, which greatly modifies the transitional dynamics. In idealized wakes, the increased stability caused by the compressibility effects leads to a vortex breakdown of secondary structures prior to the full transition of the principal mode. These findings open the door to novel mixing enhancement and flow control possibilities in the high-speed wake transition. Keywords: FLUID DYNAMICS, DIRECT NUMERICAL SIMULATIONS, FREE SHEAR FLOWS, TURBULENCE, NUMERICAL METHODS

  15. Intestinal absorption of dideoxynucleosides: characterization using a multiloop in situ technique.

    PubMed

    Mirchandani, H L; Chien, Y W

    1995-01-01

    The intestinal absorption of dideoxynucleosides was studied in rabbits, using a closed-loop mesenteric-sampling in situ technique developed in this laboratory, and the kinetic profiles were characterized. Each of the dideoxynucleosides exhibited different dependence on the intestinal regions studied: 3'-azido-2',3'-dideoxythymidine was best absorbed from the ileum, while 2',3'-dideoxyinosine and 2',3'-dideoxycytidine were preferentially absorbed from the jejunum. The results were validated by the mass-balance approach; the percent of drug retained in the intestinal lumen and that degraded at the intestinal pH, by colonic flora, in the intestinal tissue, and in plasma were assessed.

  16. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.

  17. Assessment of the MHD capability in the ATHENA code using data from the ALEX facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1989-03-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.

  18. On-line evaluation of multiloop digital controller performance

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol D.

    1993-01-01

    The purpose of this presentation is to inform the Guidance and Control community of capabilities which were developed by the Aeroservoelasticity Branch to evaluate the performance of multivariable control laws, on-line, during wind-tunnel testing. The capabilities are generic enough to be useful for all kinds of on-line analyses involving multivariable control in experimental testing. Consequently, it was decided to present this material at this workshop even though it has been presented elsewhere. Topics covered include: essential on-line analysis requirements; on-line analysis capabilities; on-line analysis software; frequency domain procedures; controller performance evaluation; frequency-domain flutter suppression; and plant determination.

  19. Modeling pilot interaction with automated digital avionics systems: Guidance and control algorithms for contour and nap-of-the-Earth flight

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1990-01-01

    A collection of technical papers are presented that cover modeling pilot interaction with automated digital avionics systems and guidance and control algorithms for contour and nap-of-the-earth flight. The titles of the papers presented are as follows: (1) Automation effects in a multiloop manual control system; (2) A qualitative model of human interaction with complex dynamic systems; (3) Generalized predictive control of dynamic systems; (4) An application of generalized predictive control to rotorcraft terrain-following flight; (5) Self-tuning generalized predictive control applied to terrain-following flight; and (6) Precise flight path control using a predictive algorithm.

  20. Balanced bridge feedback control system

    NASA Technical Reports Server (NTRS)

    Lurie, Boris J. (Inventor)

    1990-01-01

    In a system having a driver, a motor, and a mechanical plant, a multiloop feedback control apparatus controls the movement and/or positioning of the mechanical plant. The control apparatus has a first local bridge feedback loop for feeding back a signal representative of a selected ratio of voltage and current at the output of the driver, and a second bridge feedback loop for feeding back a signal representative of a selected ratio of force and velocity at the output of the motor. The control apparatus may further include an outer loop for feeding back a signal representing the angular velocity and/or position of the mechanical plant.

  1. Stability of multiloop LQ regulators with nonlinearities. I - Regions of attraction. II - Regions of ultimate boundedness

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1986-01-01

    An investigation is conducted for the closed loop stability of linear time-invariant systems controlled by linear quadratic (LQ) regulators, in cases where nonlinearities exist in the control channels lying outside the stability sector in regions away from the origin. The estimate of the region of attraction thus obtained furnishes methods for the selection of performance function weights for more robust LQ designs. Attention is then given to the closed loop stability of linear time-invariant systems controlled by the LQ regulators when the nonlinearities in the loops escape the stability sector in a bounded region containing the origin.
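
    For readers who want a concrete nominal design to anchor the discussion, the LQ gain about which such sector-bounded robustness regions are estimated follows directly from the algebraic Riccati equation. The sketch below uses scipy on a hypothetical double-integrator plant with hypothetical weights; it is not tied to the flexible-structure examples of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant x' = A x + B u with LQ weights Q, R.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)            # optimal full-state feedback gain
print("LQ gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```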

  2. PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.

    PubMed

    Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy

    2015-05-01

    We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate--slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.

  3. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging.

    PubMed

    Jiang, J; Hall, T J

    2007-07-07

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Then, displacement estimates from its adjacent columns provide good guidance for motion tracking in a significantly reduced search region to reduce computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separated tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames s^-1) that exceed our previous methods.
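
    The column-based guidance can be illustrated with a deliberately simplified 1D stand-in for the authors' algorithm: for each A-line, an axial displacement is found by maximizing normalized cross-correlation over a small search window, and the previous column's estimate centers the search window for the next column. The data, window size, and search range below are synthetic and hypothetical.

```python
import numpy as np

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_columns(pre, post, win=32, search=8):
    """Estimate one axial displacement per column; each column's search window
    is centred on the previous column's estimate (the 'guidance' idea)."""
    n_samples, n_cols = pre.shape
    start = n_samples // 2 - win // 2           # a single depth, for brevity
    shifts = np.zeros(n_cols, dtype=int)
    guess = 0
    for c in range(n_cols):
        ref = pre[start:start + win, c]
        best, best_shift = -np.inf, guess
        for s in range(guess - search, guess + search + 1):
            seg = post[start + s:start + s + win, c]
            score = ncc(ref, seg)
            if score > best:
                best, best_shift = score, s
        shifts[c] = best_shift
        guess = best_shift                      # guide the next column
    return shifts

rng = np.random.default_rng(0)
pre = rng.standard_normal((256, 16))
post = np.roll(pre, 5, axis=0)                  # uniform axial displacement of 5 samples
print(track_columns(pre, post))                 # expect all entries == 5
```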

  4. Condition number estimation of preconditioned matrices.

    PubMed

    Kushida, Noriyuki

    2015-01-01

    The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method which is based on the Lanczos connection gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers. This is because the preconditioned matrices become dense, even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered to be applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. The feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tri-diagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even with a simple problem. On the other hand, the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix, and matrices generated with the finite element method.
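
    The flavour of a Hager-type, matrix-free estimator can be conveyed with a short sketch: the 1-norm of an operator is estimated from a handful of products with the operator and its transpose, and a condition number estimate combines the estimates for A and for its inverse (the latter applied through a sparse LU factorization). This is a simplified illustration, not the method developed in the paper, and the test matrix is a hypothetical stand-in for a preconditioned matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def hager_onenorm(apply_op, apply_op_t, n, itmax=10):
    """Estimate the 1-norm of a linear operator (Hager, 1984)."""
    x = np.full(n, 1.0 / n)
    estimate = 0.0
    for _ in range(itmax):
        y = apply_op(x)
        estimate = np.linalg.norm(y, 1)
        xi = np.where(y >= 0.0, 1.0, -1.0)
        z = apply_op_t(xi)
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:               # Hager's stopping test
            break
        x = np.zeros(n)
        x[j] = 1.0
    return estimate

# Sparse test matrix: 1D Laplacian (hypothetical stand-in for a preconditioned matrix).
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
lu = splu(A)

norm_A = hager_onenorm(lambda v: A @ v, lambda v: A.T @ v, n)
norm_Ainv = hager_onenorm(lambda v: lu.solve(v), lambda v: lu.solve(v, trans="T"), n)
print("estimated 1-norm condition number:", norm_A * norm_Ainv)
print("exact 1-norm condition number:   ", np.linalg.cond(A.toarray(), 1))
```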

  5. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on the structured grid that is maximally parallelizable, with the discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, photoelectric effect, pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaking scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in energy variable of delta scattering kernel for elastic scattering with reduced truncation error from the numerical integration based on the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  6. Development of control strategies for safe microburst penetration: A progress report

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1987-01-01

    A single-engine, propeller-driven, general-aviation model was incorporated into the nonlinear simulation and into the linear analysis of root loci and frequency response. Full-scale wind tunnel data provided its aerodynamic model, and the thrust model included the airspeed-dependent effects of power and propeller efficiency. Also, the parameters of the Jet Transport model were changed to correspond more closely to the Boeing 727. In order to study their effects on steady-state response to vertical wind inputs, altitude and total specific energy (air-relative and inertial) feedback capabilities were added to the nonlinear and linear models. Multiloop system design goals were defined. Attempts were made to develop controllers which achieved these goals.

  7. Perturbative Quantum Gravity from Gauge Theory

    NASA Astrophysics Data System (ADS)

    Carrasco, John Joseph

    In this dissertation we present the graphical techniques recently developed in the construction of multi-loop scattering amplitudes using the method of generalized unitarity. We construct the three-loop and four-loop four-point amplitudes of N = 8 supergravity using these methods and the Kawai, Lewellen and Tye tree-level relations which map tree-level gauge theory amplitudes to tree-level gravity theory amplitudes. We conclude by extending a tree-level duality between color and kinematics, generic to gauge theories, to a loop-level conjecture, allowing the easy relation between loop-level gauge and gravity kinematics. We provide non-trivial evidence for this conjecture at three loops in the particular case of maximal supersymmetry.

  8. PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences

    PubMed Central

    Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong

    2015-01-01

    We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate—slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory. PMID:25549288

  9. Type 0 open string amplitudes and the tensionless limit

    NASA Astrophysics Data System (ADS)

    Rojas, Francisco

    2014-12-01

    The sum over planar multiloop diagrams in the NS+ sector of type 0 open strings in flat spacetime has been proposed by Thorn as a candidate to resolve nonperturbative issues of gauge theories in the large N limit. With SU(N) Chan-Paton factors, the sum over planar open string multiloop diagrams describes the 't Hooft limit N → ∞ with N g_s^2 held fixed. By including only planar diagrams in the sum the usual mechanism for the cancellation of loop divergences (which occurs, for example, among the planar and Möbius strip diagrams by choosing a specific gauge group) is not available and a renormalization procedure is needed. In this article the renormalization is achieved by suspending total momentum conservation by an amount p ≡ ∑_{i=1}^{n} k_i ≠ 0 at the level of the integrands in the integrals over the moduli and analytically continuing them to p = 0 at the very end. This procedure has been successfully tested for the 2- and 3-gluon planar loop amplitudes by Thorn. Gauge invariance is respected and the correct running of the coupling in the limiting gauge field theory was also correctly obtained. In this article we extend those results in two directions. First, we generalize the renormalization method to an arbitrary n-gluon planar loop amplitude, giving full details for the 4-point case. One of our main results is to provide a fully renormalized amplitude which is free of both UV and the usual spurious divergences, leaving only the physical singularities in it. Second, using the complete renormalized amplitude, we extract the high-energy scattering regime at fixed angle (tensionless limit). Apart from obtaining the usual exponential falloff at high energies, we compute the full dependence on the scattering angle, which shows the existence of a smooth connection between the Regge and hard scattering regimes.

  10. Application of a hybrid MPI/OpenMP approach for parallel groundwater model calibration using multi-core computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application, and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified to account for over 97% of the total computational time using GPROF. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects over 90% cache miss rates. With this loop rewritten, a similar speedup to the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or multiple compute nodes on a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most of the existing groundwater model codes for many applications.

  11. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Groundwater model calibration is becoming increasingly computationally time-intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. At first, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, compute nodes at the number of adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the center difference is used), are used to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis where thousands of compute nodes can be efficiently utilized.

  12. NASA Tech Briefs, January 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics covered include: Flexible Skins Containing Integrated Sensors and Circuitry; Artificial Hair Cells for Sensing Flows; Video Guidance Sensor and Time-of-Flight Rangefinder; Optical Beam-Shear Sensors; Multiple-Agent Air/Ground Autonomous Exploration Systems; A 640 512-Pixel Portable Long-Wavelength Infrared Camera; An Array of Optical Receivers for Deep-Space Communications; Microstrip Antenna Arrays on Multilayer LCP Substrates; Applications for Subvocal Speech; Multiloop Rapid-Rise/Rapid Fall High-Voltage Power Supply; The PICWidget; Fusing Symbolic and Numerical Diagnostic Computations; Probabilistic Reasoning for Robustness in Automated Planning; Short-Term Forecasting of Radiation Belt and Ring Current; JMS Proxy and C/C++ Client SDK; XML Flight/Ground Data Dictionary Management; Cross-Compiler for Modeling Space-Flight Systems; Composite Elastic Skins for Shape-Changing Structures; Glass/Ceramic Composites for Sealing Solid Oxide Fuel Cells; Aligning Optical Fibers by Means of Actuated MEMS Wedges; Manufacturing Large Membrane Mirrors at Low Cost; Double-Vacuum-Bag Process for Making Resin- Matrix Composites; Surface Bacterial-Spore Assay Using Tb3+/DPA Luminescence; Simplified Microarray Technique for Identifying mRNA in Rare Samples; High-Resolution, Wide-Field-of-View Scanning Telescope; Multispectral Imager With Improved Filter Wheel and Optics; Integral Radiator and Storage Tank; Compensation for Phase Anisotropy of a Metal Reflector; Optical Characterization of Molecular Contaminant Films; Integrated Hardware and Software for No-Loss Computing; Decision-Tree Formulation With Order-1 Lateral Execution; GIS Methodology for Planning Planetary-Rover Operations; Optimal Calibration of the Spitzer Space Telescope; Automated Detection of Events of Scientific Interest; Representation-Independent Iteration of Sparse Data Arrays; Mission Operations of the Mars Exploration Rovers; and More About Software for No-Loss Computing.

  13. High-fidelity large eddy simulation for supersonic jet noise prediction

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt M.

    The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused at evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield. Integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used for propagation of the sound waves to the farfield. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments are undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without including an end-cap, averaging over multiple end-caps, and including an approximate surface integral correction term. Secondly, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly-parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero pressure gradient flat plate boundary layers, and for subsonic and supersonic jets. The supersonic code additions and the wall model treatment are then utilized to simulate military-style nozzles with and without beveling of the nozzle exit plane. Experiments of beveled converging-diverging nozzles have found reduced noise levels for some observer locations. Predicting the noise for these geometries provides a good initial test of the overall methodology for a more complex nozzle. The jet flowfield and acoustic data are analyzed and compared to similar experiments and excellent agreement is found. Potential areas of improvement are discussed for future research.

  14. Condition Number Estimation of Preconditioned Matrices

    PubMed Central

    Kushida, Noriyuki

    2015-01-01

    The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method which is based on the Lanczos connection gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers. This is because the preconditioned matrices become dense, even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered to be applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager’s method. The feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tri-diagonal matrix and Pei’s matrix. As a result, the Lanczos connection method contains around 10% error in the results even with a simple problem. On the other hand, the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei’s matrix, and matrices generated with the finite element method. PMID:25816331

  15. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Hall, T. J.

    2007-07-01

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Then, displacement estimates from its adjacent columns provide good guidance for motion tracking in a significantly reduced search region to reduce computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separated tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows® system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames s^-1) that exceed our previous methods.

  16. Application of underwater spectrometric system for survey of ponds of the MR reactor (NRC Kurchatov institute)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stepanov, Vyacheslav; Potapov, Victor; Safronov, Alexey

    2013-07-01

    An underwater spectrometric system for surveying the bottom of the ponds of the materials-science multi-loop reactor MR was developed. The system uses CdZnTe (CZT) detectors that allow spectrometric measurements in high radiation fields. The underwater system was used in the spectrometric survey of the bottom of the MR reactor pool, as well as in the survey of highly radioactive containers and parts of the reactor structure located in the MR storage pool. As a result of this work, irradiated nuclear fuel was detected on the bottom of the pools, and estimates were obtained of the effective surface activity of the detected radionuclides and of the dose rate they create. (authors)

  17. TRAC-PF1/MOD1 support calculations for the MIST/OTIS program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujita, R.K.; Knight, T.D.

    1984-01-01

    We are using the Transient Reactor Analysis Code (TRAC), specifically version TRAC-PF1/MOD1, to perform analyses in support of the MultiLoop Integral-System Test (MIST) and the Once-Through Integral-System (OTIS) experiment program. We have analyzed Geradrohr Dampferzeuger Anlage (GERDA) Test 1605AA to benchmark the TRAC-PF1/MOD1 code against phenomena expected to occur in a raised-loop B and W plant during a small-break loss-of-coolant accident (SBLOCA). These results show that the code can calculate both single- and two-phase natural circulation, flow interruption, boiler-condenser-mode (BCM) heat transfer, and primary-system refill in a B and W-type geometry with low-elevation auxiliary feedwater. 19 figures, 7 tables.

  18. Assessment of the MHD capability in the ATHENA code using data from the ALEX (Argonne Liquid Metal Experiment) facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1988-10-28

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility. 13 refs., 4 figs., 2 tabs.

  19. Estimation of regions of attraction and ultimate boundedness for multiloop LQ regulators. [Linear Quadratic

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1984-01-01

    Closed-loop stability is investigated for multivariable linear time-invariant systems controlled by optimal full state feedback linear quadratic (LQ) regulators, with nonlinear gains present in the feedback channels. Estimates are obtained for the region of attraction when the nonlinearities escape the (0.5, infinity) sector in regions away from the origin and for the region of ultimate boundedness when the nonlinearities escape the sector near the origin. The expressions for these regions also provide methods for selecting the performance function parameters in order to obtain LQ designs with better tolerance for nonlinearities. The analytical results are illustrated by applying them to the problem of controlling the rigid-body pitch angle and elastic motion of a large, flexible space antenna.

  20. Automated neurovascular tracing and analysis of the knife-edge scanning microscope Rat Nissl data set using a computing cluster.

    PubMed

    Sungjun Lim; Nowak, Michael R; Yoonsuck Choe

    2016-08-01

    We present a novel, parallelizable algorithm capable of automatically reconstructing and calculating anatomical statistics of cerebral vascular networks embedded in large volumes of Rat Nissl-stained data. In this paper, we report the results of our method using Rattus somatosensory cortical data acquired using Knife-Edge Scanning Microscopy. Our algorithm performs the reconstruction task with averaged precision, recall, and F2-score of 0.978, 0.892, and 0.902 respectively. Calculated anatomical statistics show some conformance to values previously reported. The results that can be obtained from our method are expected to help explicate the relationship between the structural organization of the microcirculation and normal (and abnormal) cerebral functioning.

  1. Fabrication of Superconducting Quantum Interference Device Magnetometers on a Glass Epoxy Polyimide Resin Substrate with Copper Terminals

    NASA Astrophysics Data System (ADS)

    Kawai, Jun; Kawabata, Miki; Oyama, Daisuke; Uehara, Gen

    We have developed a fabrication technique for superconducting quantum interference device (SQUID) magnetometers based on Nb/AlAlOx/Nb junctions directly on a glass epoxy polyimide resin substrate with copper terminals embedded in advance. The advantage of this method is that no additional substrate or wirebonds are needed for assembly. Compared with conventional SQUID magnetometers, which are assembled from a SQUID chip fabricated on a Si substrate using wirebonding, a lower risk of disconnection can be expected. A directly coupled multi-loop SQUID magnetometer fabricated with this method has noise performance as good as that of a SQUID magnetometer of the same design fabricated on a Si wafer. The magnetometer has sustained its performance through 13 thermal cycle tests so far.

  2. All two-loop maximally helicity-violating amplitudes in multi-Regge kinematics from applied symbology

    NASA Astrophysics Data System (ADS)

    Prygarin, Alexander; Spradlin, Marcus; Vergu, Cristian; Volovich, Anastasia

    2012-04-01

    Recent progress on scattering amplitudes has benefited from the mathematical technology of symbols for efficiently handling the types of polylogarithm functions which frequently appear in multiloop computations. The symbol for all two-loop maximally helicity violating amplitudes in planar supersymmetric Yang-Mills theory is known, but explicit analytic formulas for the amplitudes are hard to come by except in special limits where things simplify, such as multi-Regge kinematics. By applying symbology we obtain a formula for the leading behavior of the imaginary part (the Mandelstam cut contribution) of this amplitude in multi-Regge kinematics for any number of gluons. Our result predicts a simple recursive structure which agrees with a direct Balitsky-Fadin-Kuraev-Lipatov computation carried out in a parallel publication.

  3. Three-Dimensional High-Order Spectral Finite Volume Method for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Many areas require a very high-order accurate numerical solution of conservation laws for complex shapes. This paper deals with the extension to three dimensions of the Spectral Finite Volume (SV) method for unstructured grids, which was developed to solve such problems. We first summarize the limitations of traditional methods such as finite-difference, and finite-volume for both structured and unstructured grids. We then describe the basic formulation of the spectral finite volume method. What distinguishes the SV method from conventional high-order finite-volume methods for unstructured triangular or tetrahedral grids is the data reconstruction. Instead of using a large stencil of neighboring cells to perform a high-order reconstruction, the stencil is constructed by partitioning each grid cell, called a spectral volume (SV), into 'structured' sub-cells, called control volumes (CVs). One can show that if all the SV cells are partitioned into polygonal or polyhedral CV sub-cells in a geometrically similar manner, the reconstructions for all the SVs become universal, irrespective of their shapes, sizes, orientations, or locations. It follows that the reconstruction is reduced to a weighted sum of unknowns involving just a few simple adds and multiplies, and those weights are universal and can be pre-determined once for all. The method is thus very efficient, accurate, and yet geometrically flexible. The most critical part of the SV method is the partitioning of the SV into CVs. In this paper we present the partitioning of a tetrahedral SV into polyhedral CVs with one free parameter for polynomial reconstructions up to degree of precision five. (Note that the order of accuracy of the method is one order higher than the reconstruction degree of precision.) The free parameter will be determined by minimizing the Lebesgue constant of the reconstruction matrix or similar criteria to obtain optimized partitions. The details of an efficient, parallelizable code to solve three-dimensional problems for any order of accuracy are then presented. Important aspects of the data structure are discussed. Comparisons with the Discontinuous Galerkin (DG) method are made. Numerical examples for wave propagation problems are presented.

  4. An efficient framework for optimization and parameter sensitivity analysis in arterial growth and remodeling computations

    PubMed Central

    Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.

    2013-01-01

    Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380

  5. The Interplay of Opacities and Rotation in Promoting the Explosion of Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Vartanyan, David; Burrows, Adam; Radice, David

    2018-01-01

    For over five decades, the mechanism of explosion in core-collapse supernovae has been a central unsolved problem in astrophysics, challenging both our computational capabilities and our understanding of relevant physics. Current simulations often produce explosions, but they are at times underenergetic. The neutrino mechanism, wherein a fraction of emitted neutrinos is absorbed in the mantle of the star to reignite the stalled shock, remains the dominant model for reviving explosions in massive stars undergoing core collapse. We present here a diverse suite of 2D axisymmetric simulations produced by FORNAX, a highly parallelizable multidimensional supernova simulation code. We explore the effects of various corrections, including the many-body correction, to neutrino-matter opacities and the possible role of rotation in promoting explosion amongst various core-collapse progenitors.

  6. Analysis and optimization of population annealing

    NASA Astrophysics Data System (ADS)

    Amey, Christopher; Machta, Jonathan

    2018-03-01

    Population annealing is an easily parallelizable sequential Monte Carlo algorithm that is well suited for simulating the equilibrium properties of systems with rough free-energy landscapes. In this work we seek to understand and improve the performance of population annealing. We derive several useful relations between quantities that describe the performance of population annealing and use these relations to suggest methods to optimize the algorithm. These optimization methods were tested by performing large-scale simulations of the three-dimensional (3D) Edwards-Anderson (Ising) spin glass and measuring several observables. The optimization methods were found to substantially decrease the amount of computational work necessary as compared to previously used, unoptimized versions of population annealing. We also obtain more accurate values of several important observables for the 3D Edwards-Anderson model.
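
    A stripped-down population-annealing loop for a toy one-dimensional double-well energy is sketched below; the population size, temperature schedule, step size, and number of Metropolis sweeps are arbitrary illustrative choices rather than the optimized settings studied in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      energy = lambda x: (x**2 - 1.0)**2              # toy 1-D double-well energy
      R = 2000                                        # population size
      betas = np.linspace(0.0, 8.0, 41)               # inverse-temperature schedule

      pop = rng.uniform(-2.5, 2.5, R)                 # population equilibrated at beta = 0
      log_z_ratio = 0.0                               # running estimate of ln(Z_final / Z_0)
      for b_old, b_new in zip(betas[:-1], betas[1:]):
          raw = -(b_new - b_old) * energy(pop)        # log importance weights for the beta step
          m = raw.max()
          w = np.exp(raw - m)
          log_z_ratio += m + np.log(w.mean())
          # resample replicas in proportion to their weights
          pop = pop[rng.choice(R, size=R, p=w / w.sum())]
          # a few Metropolis sweeps at the new temperature; each replica is
          # independent, which is what makes the algorithm easy to parallelize
          for _ in range(5):
              prop = pop + 0.3 * rng.standard_normal(R)
              accept = np.log(rng.random(R)) < -b_new * (energy(prop) - energy(pop))
              pop = np.where(accept, prop, pop)

      print("mean energy at beta=%.1f: %.3f" % (betas[-1], energy(pop).mean()))
      print("ln(Z_final/Z_0) estimate: %.2f" % log_z_ratio)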

  7. Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; VanGelder, Allen; Tarantino, Paul; Gibbs, Jonathan

    1996-01-01

    A general volume rendering technique is described that efficiently produces images of excellent quality from data defined over irregular grids having a wide variety of formats. Rendering is done in software, eliminating the need for special graphics hardware, as well as any artifacts associated with graphics hardware. Images of volumes with about one million cells can be produced in one to several minutes on a workstation with a 150 MHz processor. A significant advantage of this method for applications such as computational fluid dynamics is that it can process multiple intersecting grids. Such grids present problems for most current volume rendering techniques. Also, the wide range of cell sizes (by a factor of 10,000 or more), which is typical of such applications, does not present difficulties, as it does for many techniques. A spatial hierarchical organization makes it possible to access data from a restricted region efficiently. The tree has greater depth in regions of greater detail, determined by the number of cells in the region. It also makes it possible to render useful 'preview' images very quickly (about one second for one-million-cell grids) by displaying each region associated with a tree node as one cell. Previews show enough detail to navigate effectively in very large data sets. The algorithmic techniques include use of a k-d tree, with prefix-order partitioning of triangles, to reduce the number of primitives that must be processed for one rendering, coarse-grain parallelism for a shared-memory MIMD architecture, a new perspective transformation that achieves greater numerical accuracy, and a scanline algorithm with depth sorting and a new clipping technique.

  8. Relaxation and Preconditioning for High Order Discontinuous Galerkin Methods with Applications to Aeroacoustics and High Speed Flows

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    2004-01-01

    This project investigates the development of discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection dominated problems, with applications to aeroacoustics. Other related issues in high order WENO finite difference and finite volume methods have also been investigated. Discontinuous Galerkin and WENO methods are two classes of high order, high resolution methods suitable for convection dominated simulations with possible discontinuous or sharp gradient solutions. In [18], we first review these two classes of methods, pointing out their similarities and differences in algorithm formulation, theoretical properties, implementation issues, applicability, and relative advantages. We then present some quantitative comparisons of the third order finite volume WENO methods and discontinuous Galerkin methods for a series of test problems to assess their relative merits in accuracy and CPU timing. In [3], we review the development of the Runge-Kutta discontinuous Galerkin (RKDG) methods for non-linear convection-dominated problems. These robust and accurate methods have made their way into the mainstream of computational fluid dynamics and are quickly finding use in a wide variety of applications. They combine a special class of Runge-Kutta time discretizations, that allows the method to be non-linearly stable regardless of its accuracy, with a finite element space discretization by discontinuous approximations, that incorporates the ideas of numerical fluxes and slope limiters coined during the remarkable development of the high-resolution finite difference and finite volume schemes. The resulting RKDG methods are stable, high-order accurate, and highly parallelizable schemes that can easily handle complicated geometries and boundary conditions. We review the theoretical and algorithmic aspects of these methods and show several applications including nonlinear conservation laws, the compressible and incompressible Navier-Stokes equations, and Hamilton-Jacobi-like equations.

  9. Identification of FOPDT and SOPDT process dynamics using closed loop test.

    PubMed

    Bajarangbali, Raghunath; Majhi, Somanath; Pandey, Saurabh

    2014-07-01

    In this paper, identification of stable and unstable first order, second order overdamped and underdamped process dynamics with time delay is presented. A relay with hysteresis is used to induce a limit cycle output, and using this information, unknown process model parameters are estimated. State space based generalized analytical expressions are derived to achieve accurate results. To show the performance of the proposed method, expressions are also derived for systems with a zero. In real time systems, measurement noise is an important issue during identification of process dynamics. A relay with hysteresis reduces the effect of measurement noise; in addition, a new multiloop control strategy is proposed to recover the original limit cycle. Simulation results are included to validate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Analysis and design of a standardized control module for switching regulators

    NASA Astrophysics Data System (ADS)

    Lee, F. C.; Mahmoud, M. F.; Yu, Y.; Kolecki, J. C.

    1982-07-01

    Three basic switching regulators: buck, boost, and buck/boost, employing a multiloop standardized control module (SCM) were characterized by a common small signal block diagram. Employing the unified model, regulator performances such as stability, audiosusceptibility, output impedance, and step load transient are analyzed and key performance indexes are expressed in simple analytical forms. More importantly, the performance characteristics of all three regulators are shown to enjoy common properties due to the unique SCM control scheme which nullifies the positive zero and provides adaptive compensation to the moving poles of the boost and buck/boost converters. This allows a simple unified design procedure to be devised for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt.

  11. Integrand-level reduction of loop amplitudes by computational algebraic geometry methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yang

    2012-09-01

    We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts, via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, with the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2 ɛ dimensions, and we present some two and three-loop examples of applications of this algorithm.
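
    The core algebraic step, reducing a candidate numerator modulo the ideal generated by the cut equations, can be mimicked with a toy SymPy example. This is only a generic illustration of Gröbner-basis reduction under assumed, made-up polynomials; it is not the BasisDet package, and the "cut equations" below are not genuine loop-momentum constraints.

      from sympy import symbols, groebner, reduced

      x, y = symbols('x y')
      cut_equations = [x**2 + y**2 - 1, x - y]     # stand-in "on-shell" constraints
      G = groebner(cut_equations, x, y, order='lex')

      numerator = x**3 * y + 2 * x                 # candidate integrand numerator
      coeffs, remainder = reduced(numerator, G.exprs, x, y, order='lex')
      print("Groebner basis:", G.exprs)
      print("remainder on the cut:", remainder)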

  12. The integrated manual and automatic control of complex flight systems

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1991-01-01

    Research dealt with the general area of optimal flight control synthesis for manned flight vehicles. The work was generic; no specific vehicle was the focus of study. However, the class of vehicles generally considered were those for which high authority, multivariable control systems might be considered, for the purpose of stabilization and the achievement of optimal handling characteristics. Within this scope, the topics of study included several optimal control synthesis techniques, control-theoretic modeling of the human operator in flight control tasks, and the development of possible handling qualities metrics and/or measures of merit. Basic contributions were made in all these topics, including human operator (pilot) models for multi-loop tasks, optimal output feedback flight control synthesis techniques; experimental validations of the methods developed, and fundamental modeling studies of the air-to-air tracking and flared landing tasks.

  13. The Process of Parallelizing the Conjunction Prediction Algorithm of ESA's SSA Conjunction Prediction Service Using GPGPU

    NASA Astrophysics Data System (ADS)

    Fehr, M.; Navarro, V.; Martin, L.; Fletcher, E.

    2013-08-01

    Space Situational Awareness[8] (SSA) is defined as the comprehensive knowledge, understanding and maintained awareness of the population of space objects, the space environment and existing threats and risks. As ESA's SSA Conjunction Prediction Service (CPS) requires the repetitive application of a processing algorithm against a data set of man-made space objects, it is crucial to exploit the highly parallelizable nature of this problem. Currently, the CPS system makes use of OpenMP[7] for parallelization purposes using CPU threads, but only a GPU with its hundreds of cores can fully benefit from such high levels of parallelism. This paper presents the adaptation of several core algorithms[5] of the CPS for general-purpose computing on graphics processing units (GPGPU) using NVIDIA's Compute Unified Device Architecture (CUDA).
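
    The reason the problem maps so well onto GPUs is that each object pair can be screened independently. The sketch below shows that structure with a plain vectorized distance filter at a single epoch; the random positions, the 10 km threshold, and the brute-force pairing are illustrative stand-ins, not a reimplementation of the CPS conjunction algorithms.

      import numpy as np

      rng = np.random.default_rng(2)
      pos = rng.uniform(-7000.0, 7000.0, (5000, 3))   # object positions at one epoch [km]
      threshold = 10.0                                # screening distance [km]

      def close_pairs(pos, threshold):
          # every pair (i, j) is independent, so the work splits naturally across
          # CPU threads or GPU blocks; here numpy screens one target object
          # against all later objects in a single vectorized pass
          pairs = []
          for i in range(len(pos) - 1):
              d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
              hits = np.nonzero(d < threshold)[0]
              pairs.extend((i, i + 1 + j) for j in hits)
          return pairs

      print(len(close_pairs(pos, threshold)), "candidate conjunctions")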

  14. Refolding and simultaneous purification by three-phase partitioning of recombinant proteins from inclusion bodies

    PubMed Central

    Raghava, Smita; Barua, Bipasha; Singh, Pradeep K.; Das, Mili; Madan, Lalima; Bhattacharyya, Sanchari; Bajaj, Kanika; Gopal, B.; Varadarajan, Raghavan; Gupta, Munishwar N.

    2008-01-01

    Many recombinant eukaryotic proteins tend to form insoluble aggregates called inclusion bodies, especially when expressed in Escherichia coli. We report the first application of the technique of three-phase partitioning (TPP) to obtain correctly refolded active proteins from solubilized inclusion bodies. TPP was used for refolding 12 different proteins overexpressed in E. coli. In each case, the protein refolded by TPP gave either higher refolding yield than the earlier reported method or succeeded where earlier efforts have failed. TPP-refolded proteins were characterized and compared to conventionally purified proteins in terms of their spectral characteristics and/or biological activity. The methodology is scaleable and parallelizable and does not require subsequent concentration steps. This approach may serve as a useful complement to existing refolding strategies of diverse proteins from inclusion bodies. PMID:18780821

  15. On B-type Open-Closed Landau-Ginzburg Theories Defined on Calabi-Yau Stein Manifolds

    NASA Astrophysics Data System (ADS)

    Babalic, Elena Mirela; Doryn, Dmitry; Lazaroiu, Calin Iuliu; Tavakol, Mehdi

    2018-05-01

    We consider the bulk algebra and topological D-brane category arising from the differential model of the open-closed B-type topological Landau-Ginzburg theory defined by a pair (X,W), where X is a non-compact Calabi-Yau manifold and W is a complex-valued holomorphic function. When X is a Stein manifold (but not restricted to be a domain of holomorphy), we extract equivalent descriptions of the bulk algebra and of the category of topological D-branes which are constructed using only the analytic space associated to X. In particular, we show that the D-brane category is described by projective factorizations defined over the ring of holomorphic functions of X. We also discuss simplifications of the analytic models which arise when X is holomorphically parallelizable and illustrate these in a few classes of examples.

  16. Static assignment of complex stochastic tasks using stochastic majorization

    NASA Technical Reports Server (NTRS)

    Nicol, David; Simha, Rahul; Towsley, Don

    1992-01-01

    We consider the problem of statically assigning many tasks to a (smaller) system of homogeneous processors, where a task's structure is modeled as a branching process, and all tasks are assumed to have identical behavior. We show how the theory of majorization can be used to obtain a partial order among possible task assignments. Our results show that if the vector of numbers of tasks assigned to each processor under one mapping is majorized by that of another mapping, then the former mapping is better than the latter with respect to a large number of objective functions. In particular, we show how measurements of finishing time, resource utilization, and reliability are all captured by the theory. We also show how the theory may be applied to the problem of partitioning a pool of processors for distribution among parallelizable tasks.
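
    The partial order itself is easy to state in code: an assignment vector x is majorized by y when, after sorting in decreasing order, every partial sum of x is bounded by the corresponding partial sum of y and the totals agree. The helper and the example vectors below are illustrative, not taken from the paper.

      import numpy as np

      def majorized_by(x, y):
          """True if x is majorized by y (integer task counts assumed)."""
          x = np.sort(np.asarray(x))[::-1]
          y = np.sort(np.asarray(y))[::-1]
          if x.sum() != y.sum():
              return False
          return bool(np.all(np.cumsum(x) <= np.cumsum(y)))

      # assigning 12 identical tasks to 4 processors: (3,3,3,3) is majorized by
      # (6,4,1,1), which the theory above associates with better performance for
      # a large class of objective functions
      print(majorized_by([3, 3, 3, 3], [6, 4, 1, 1]))   # True
      print(majorized_by([6, 4, 1, 1], [3, 3, 3, 3]))   # False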

  17. Advanced functional network analysis in the geosciences: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Runge, Jakob; Schultz, Hanna C. H.; Wiedermann, Marc; Zech, Alraune; Feldhoff, Jan; Rheinwalt, Aljoscha; Kutza, Hannes; Radebach, Alexander; Marwan, Norbert; Kurths, Jürgen

    2013-04-01

    Functional networks are a powerful tool for analyzing large geoscientific datasets such as global fields of climate time series originating from observations or model simulations. pyunicorn (pythonic unified complex network and recurrence analysis toolbox) is an open-source, fully object-oriented and easily parallelizable package written in the language Python. It allows for constructing functional networks (aka climate networks) representing the structure of statistical interrelationships in large datasets and, subsequently, investigating this structure using advanced methods of complex network theory such as measures for networks of interacting networks, node-weighted statistics or network surrogates. Additionally, pyunicorn makes it possible to study the complex dynamics of geoscientific systems as recorded by time series by means of recurrence networks and visibility graphs. The range of possible applications of the package is outlined, drawing on several examples from climatology.
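
    As a generic illustration of the functional-network construction (deliberately written with plain NumPy rather than the pyunicorn API, whose calls are not quoted in the abstract), the snippet below links nodes whose time series are strongly correlated and reads off simple network statistics; the synthetic field and the correlation threshold are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      n_nodes, n_times = 100, 500
      common = rng.standard_normal(n_times)                   # shared "climate" signal
      data = 0.5 * common + rng.standard_normal((n_nodes, n_times))

      corr = np.corrcoef(data)                                # node-by-node correlations
      adjacency = (np.abs(corr) > 0.25) & ~np.eye(n_nodes, dtype=bool)

      degree = adjacency.sum(axis=1)
      density = adjacency.sum() / (n_nodes * (n_nodes - 1))
      print("mean degree %.1f, link density %.3f" % (degree.mean(), density))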

  18. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Anderson, M. R.; Schmidt, D. K.

    1986-01-01

    In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions, once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R.E. Smith is also presented.

  19. Multi-Purpose Thermal Hydraulic Loop: Advanced Reactor Technology Integral System Test (ARTIST) Facility for Support of Advanced Reactor Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James E. O'Brien; Piyush Sabharwall; SuJong Yoon

    2001-11-01

    Effective and robust high temperature heat transfer systems are fundamental to the successful deployment of advanced reactors for both power generation and non-electric applications. Plant designs often include an intermediate heat transfer loop (IHTL) with heat exchangers at either end to deliver thermal energy to the application while providing isolation of the primary reactor system. In order to address technical feasibility concerns and challenges, a new high-temperature multi-fluid, multi-loop test facility, the "Advanced Reactor Technology Integral System Test facility" (ARTIST), is under development at the Idaho National Laboratory. The facility will include three flow loops: high-temperature helium, molten salt, and steam/water. Details of some of the design aspects and challenges of this facility, which is currently in the conceptual design phase, are discussed.

  20. On regulators with a prescribed degree of stability. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ng, P. T. P.

    1981-01-01

    Several important aspects of the Regulator with a Prescribed Degree of Stability (RPDS) methodology and its applications are considered. The solution of the time varying RPDS problem as well as the characterization of RPDS closed loop eigenstructure properties are obtained. Based on the asymptotic behavior of RPDS root loci, a one step algorithm for designing Regulators with Prescribed Damping Ratio (RPDR) is developed. The robustness properties of RPDS are characterized in terms of the properties of the return difference and the inverse return difference matrices for the RPDS state feedback loop. This class of regulators is found to possess excellent multiloop margins with respect to stability and degree of stability properties. The ability of RPDS design to tolerate changing operating conditions and unmodelled dynamics is illustrated with a multiterminal dc/ac power system example. The output feedback realization of RPDS requires the use of Linear Quadratic Gaussian (LQG) methodology.

  1. Chaotic mixing by microswimmers moving on quasiperiodic orbits

    NASA Astrophysics Data System (ADS)

    Jalali, Mir Abbas; Khoshnood, Atefeh; Alam, Mohammad-Reza

    2015-11-01

    Life on the Earth is strongly dependent upon mixing across a vast range of scales. For example, mixing distributes nutrients for microorganisms in aquatic environments, and balances the spatial energy distribution in the oceans and the atmosphere. From industrial point of view, mixing is essential in many microfluidic processes and lab-on-a-chip operations, polymer engineering, pharmaceutics, food engineering, petroleum engineering, and biotechnology. Efficient mixing, typically characterized by chaotic advection, is hard to achieve in low Reynolds number conditions because of the linear nature of the Stokes equation that governs the motion. We report the first demonstration of chaotic mixing induced by a microswimmer that strokes on quasiperiodic orbits with multi-loop turning paths. Our findings can be utilized to understand the interactions of microorganisms with their environments, and to design autonomous robotic mixers that can sweep and mix an entire volume of complex-geometry containers.

  2. Spinor formulation of topologically massive gravity

    NASA Astrophysics Data System (ADS)

    Aliev, A. N.; Nutku, Y.

    1995-12-01

    In the framework of real 2-component spinors in three dimensional space-time we present a description of topologically massive gravity (TMG) in terms of differential forms with triad scalar coefficients. This is essentially a real version of the Newman-Penrose formalism in general relativity. A triad formulation of TMG was considered earlier by Hall, Morgan and Perjes, however, due to an unfortunate choice of signature some of the spinors underlying the Hall-Morgan-Perjes formalism are real, while others are pure imaginary. We obtain the basic geometrical identities as well as the TMG field equations including a cosmological constant for the appropriate signature. As an application of this formalism we discuss the Bianchi Type VIII-IX exact solutions of TMG and point out that they are parallelizable manifolds. We also consider various re-identifications of these homogeneous spaces that result in black hole solutions of TMG.

  3. Stochastic evaluation of second-order many-body perturbation energies.

    PubMed

    Willow, Soohaeng Yoo; Kim, Kwang S; Hirata, So

    2012-11-28

    With the aid of the Laplace transform, the canonical expression of the second-order many-body perturbation correction to an electronic energy is converted into the sum of two 13-dimensional integrals, the 12-dimensional parts of which are evaluated by Monte Carlo integration. Weight functions are identified that are analytically normalizable, are finite and non-negative everywhere, and share the same singularities as the integrands. They thus generate appropriate distributions of four-electron walkers via the Metropolis algorithm, yielding correlation energies of small molecules within a few mE_h of the correct values after 10^8 Monte Carlo steps. This algorithm does away with the integral transformation as the hotspot of the usual algorithms, has a far superior size dependence of cost, does not suffer from the sign problem of some quantum Monte Carlo methods, and is potentially easily parallelizable and extensible to other more complex electron-correlation theories.
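
    The essential trick, sampling from an analytically normalizable weight function and averaging the integrand-to-weight ratio along a Metropolis walk, can be shown on a one-dimensional toy integral. The Gaussian weight and the integrand below are illustrative assumptions, vastly simpler than the 13-dimensional MP2 integrals of the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      g = lambda x: np.exp(-x**2) / (1.0 + x**2)               # integrand
      w = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2 * np.pi)   # normalized weight (standard normal)

      x, samples = 0.0, []
      for step in range(200000):
          prop = x + rng.normal(scale=1.0)
          if rng.random() < w(prop) / w(x):                    # Metropolis acceptance
              x = prop
          samples.append(g(x) / w(x))                          # ratio estimator of the integral

      est = np.mean(samples[20000:])                           # discard burn-in
      print("MC estimate %.4f  (exact ~ %.4f)" % (est, 1.3433))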

  4. The Role of Molecular Dynamics Potential of Mean Force Calculations in the Investigation of Enzyme Catalysis.

    PubMed

    Yang, Y; Pan, L; Lightstone, F C; Merz, K M

    2016-01-01

    The potential of mean force simulations, widely applied in Monte Carlo or molecular dynamics simulations, are useful tools to examine the free energy variation as a function of one or more specific reaction coordinate(s) for a given system. Implementation of the potential of mean force in the simulations of biological processes, such as enzyme catalysis, can help overcome the difficulties of sampling specific regions on the energy landscape and provide useful insights to understand the catalytic mechanism. The potential of mean force simulations usually require many, possibly parallelizable, short simulations instead of a few extremely long simulations and, therefore, are fairly manageable for most research facilities. In this chapter, we provide detailed protocols for applying the potential of mean force simulations to investigate enzymatic mechanisms for several different enzyme systems. © 2016 Elsevier Inc. All rights reserved.

  5. Approximating the Generalized Voronoi Diagram of Closely Spaced Objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, John; Daniel, Eric; Pascucci, Valerio

    2015-06-22

    We present an algorithm to compute an approximation of the generalized Voronoi diagram (GVD) on arbitrary collections of 2D or 3D geometric objects. In particular, we focus on datasets with closely spaced objects; GVD approximation is expensive and sometimes intractable on these datasets using previous algorithms. With our approach, the GVD can be computed using commodity hardware even on datasets with many, extremely tightly packed objects. Our approach is to subdivide the space with an octree that is represented with an adjacency structure. We then use a novel adaptive distance transform to compute the distance function on octree vertices. The computed distance field is sampled more densely in areas of close object spacing, enabling robust and parallelizable GVD surface generation. We demonstrate our method on a variety of data and show example applications of the GVD in 2D and 3D.

  6. Approximate Bayesian computation for spatial SEIR(S) epidemic models.

    PubMed

    Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A

    2018-02-01

    Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied to many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; however, MCMC techniques nevertheless become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
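
    A bare-bones ABC rejection loop for a much simpler, non-spatial chain-binomial SIR model shows the mechanics: draw parameters from the prior, simulate, and keep draws whose summary statistic lands close to the observed one. The model, summary statistic, tolerance, and prior ranges are all illustrative assumptions, not the ABSEIR implementation.

      import numpy as np

      rng = np.random.default_rng(5)
      N, I0, T = 1000, 5, 60

      def simulate(beta, gamma):
          """Discrete-time stochastic SIR; returns the final epidemic size."""
          S, I, total = N - I0, I0, I0
          for _ in range(T):
              p_inf = 1.0 - np.exp(-beta * I / N)
              new_inf = rng.binomial(S, p_inf)
              new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
              S, I = S - new_inf, I + new_inf - new_rec
              total += new_inf
          return total

      observed = simulate(0.5, 0.2)                    # pretend this is the data

      accepted = []
      for _ in range(20000):                           # each trial is independent -> parallelizable
          beta, gamma = rng.uniform(0.0, 1.5), rng.uniform(0.05, 0.5)
          if abs(simulate(beta, gamma) - observed) < 0.05 * N:
              accepted.append((beta, gamma))

      accepted = np.array(accepted)
      print("posterior mean beta %.2f, gamma %.2f from %d accepted draws"
            % (accepted[:, 0].mean(), accepted[:, 1].mean(), len(accepted)))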

  7. Fast multipole methods on a cluster of GPUs for the meshless simulation of turbulence

    NASA Astrophysics Data System (ADS)

    Yokota, R.; Narumi, T.; Sakamaki, R.; Kameoka, S.; Obi, S.; Yasuoka, K.

    2009-11-01

    Recent advances in the parallelizability of fast N-body algorithms and the programmability of graphics processing units (GPUs) have opened a new path for particle based simulations. For the simulation of turbulence, vortex methods can now be considered as an interesting alternative to finite difference and spectral methods. The present study focuses on the efficient implementation of the fast multipole method and pseudo-particle method on a cluster of NVIDIA GeForce 8800 GT GPUs, and applies this to a vortex method calculation of homogeneous isotropic turbulence. The results of the present vortex method agree quantitatively with those of the reference calculation using a spectral method. We achieved a maximum speed of 7.48 TFlops using 64 GPUs, and the cost performance was near 9.4/GFlops. The calculation of the present vortex method on 64 GPUs took 4120 s, while the spectral method on 32 CPUs took 4910 s.

  8. A Commodity Computing Cluster

    NASA Astrophysics Data System (ADS)

    Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.

    We have assembled a cluster of Intel-Pentium based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz to dual Pentium-III 450 MHz CPU machines. Although this will be sufficient for our "embarrassingly parallelizable problem", it may present some challenges for as yet unplanned future use. In addition, the cluster was used to construct a MIRIAD benchmark and compared with equivalent Ultra-Sparc based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50GB of disk-space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.

  9. Analysis of composite ablators using massively parallel computation

    NASA Technical Reports Server (NTRS)

    Shia, David

    1995-01-01

    In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems. The gas storage term is included in the explicit pressure calculation of both problems. Results from ablative composite plate problems are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions are different from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius type rate equations and (2) pressure-dependent Arrhenius type rate equations. The numerical results are compared to experimental results and the pressure-dependent model is able to capture the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. It is found that there is a good speedup of performance on the CM-5. For 32 CPUs, the speedup of performance is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and the effective parallelization of the algorithm. It also seems that there is an optimum number of CPUs to use for a given problem.
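
    The stability trade-off that motivates the hybrid scheme can be seen on a one-dimensional heat-conduction toy problem: the explicit update is cheap and trivially parallel but limited to dt <= dx^2/(2*alpha), while the implicit (backward Euler) update tolerates much larger steps at the cost of a linear solve. The grid, material values, and boundary treatment below are illustrative assumptions, not the ablator model of the thesis.

      import numpy as np

      n, alpha, dx = 101, 1.0e-6, 1.0e-3            # nodes, diffusivity [m^2/s], spacing [m]
      T = np.zeros(n); T[0] = 1000.0                # hot boundary on one side

      def explicit_step(T, dt):
          r = alpha * dt / dx**2                    # must satisfy r <= 0.5 for stability
          Tn = T.copy()
          Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          return Tn

      def implicit_step(T, dt):
          r = alpha * dt / dx**2                    # unconditionally stable, needs a solve
          A = (np.diag((1 + 2 * r) * np.ones(n))
               + np.diag(-r * np.ones(n - 1), 1)
               + np.diag(-r * np.ones(n - 1), -1))
          A[0, :] = 0; A[0, 0] = 1                  # fixed-temperature boundaries
          A[-1, :] = 0; A[-1, -1] = 1
          return np.linalg.solve(A, T)

      print(explicit_step(T, dt=0.4 * dx**2 / alpha)[1])   # small, stable explicit step
      print(implicit_step(T, dt=50 * dx**2 / alpha)[1])    # much larger implicit step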

  10. Use of active control systems to improve bending and rotor flapping response of a tilt rotor VTOL airplane

    NASA Technical Reports Server (NTRS)

    Whitaker, H. P.; Cheng, Y.

    1975-01-01

    The results are summarized of an analytical study of the use of active control systems for the purpose of reducing the root mean square response of wing vertical bending and rotor flapping to atmospheric turbulence for a tilt-rotor VTOL airplane. Only the wing/rotor assembly was considered so that results of a wind tunnel test program would be applicable in a subsequent phase of the research. The capabilities and limitations of simple single feedback configurations were identified, and the most promising multiloop feedback configurations were then investigated. Design parameters were selected so as to minimize either wing bending or rotor flapping response. Within the constraints imposed by practical levels of feedback gains and complexity and by considerations of safety, reduction in response due to turbulence of the order of 30 to 50 percent is predicted using the rotor longitudinal cyclic and a trailing edge wing flap as control effectors.

  11. Control theory analysis of a three-axis VTOL flight director. M.S. Thesis - Pennsylvania State Univ.

    NASA Technical Reports Server (NTRS)

    Niessen, F. R.

    1971-01-01

    A control theory analysis of a VTOL flight director and the results of a fixed-based simulator evaluation of the flight-director commands are discussed. The VTOL configuration selected for this study is a helicopter-type VTOL which controls the direction of the thrust vector by means of vehicle-attitude changes and, furthermore, employs high-gain attitude stabilization. This configuration is the same as one which was simulated in actual instrument flight tests with a variable stability helicopter. Stability analyses are made for each of the flight-director commands, assuming a single input-output, multi-loop system model for each control axis. The analyses proceed from the inner-loops to the outer-loops, using an analytical pilot model selected on the basis of the innermost-loop dynamics. The time response of the analytical model of the system is primarily used to adjust system gains, while root locus plots are used to identify dominant modes and mode interactions.

  12. RNA secondary structure prediction using soft computing.

    PubMed

    Ray, Shubhra Sankar; Pal, Sankar K

    2013-01-01

    Prediction of RNA structure is invaluable in creating new drugs and understanding genetic diseases. Several deterministic algorithms and soft computing-based techniques have been developed for more than a decade to determine the structure from a known RNA sequence. Soft computing gained importance with the need to get approximate solutions for RNA sequences by considering the issues related with kinetic effects, cotranscriptional folding, and estimation of certain energy parameters. A brief description of some of the soft computing-based techniques, developed for RNA secondary structure prediction, is presented along with their relevance. The basic concepts of RNA and its different structural elements like helix, bulge, hairpin loop, internal loop, and multiloop are described. These are followed by different methodologies, employing genetic algorithms, artificial neural networks, and fuzzy logic. The role of various metaheuristics, like simulated annealing, particle swarm optimization, ant colony optimization, and tabu search is also discussed. A relative comparison among different techniques, in predicting 12 known RNA secondary structures, is presented, as an example. Future challenging issues are then mentioned.

  13. Comparison of simulator fidelity model predictions with in-simulator evaluation data

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.

    1983-01-01

    A full factorial in simulator experiment of a single axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed loop model of a real time digital simulation facility. The results of the experiment encompassing various simulation fidelity factors, such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than were predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.

  14. Versatile low-Reynolds-number swimmer with three-dimensional maneuverability.

    PubMed

    Jalali, Mir Abbas; Alam, Mohammad-Reza; Mousavi, SeyyedHossein

    2014-11-01

    We design and simulate the motion of a swimmer, the Quadroar, with three-dimensional translation and reorientation capabilities in low-Reynolds-number conditions. The Quadroar is composed of an I-shaped frame whose body link is a simple linear actuator and four disks that can rotate about the axes of flange links. The time symmetry is broken by a combination of disk rotations and the one-dimensional expansion or contraction of the body link. The Quadroar propels on forward and transverse straight lines and performs full three-dimensional reorientation maneuvers, which enable it to swim along arbitrary trajectories. We find continuous operation modes that propel the swimmer on planar and three-dimensional periodic and quasiperiodic orbits. Precessing quasiperiodic orbits consist of slow lingering phases with cardioid or multiloop turns followed by directional propulsive phases. Quasiperiodic orbits allow the swimmer to access large parts of its neighboring space without using complex control strategies. We also discuss the feasibility of fabricating a nanoscale Quadroar by photoactive molecular rotors.

  15. Manual control models of industrial management

    NASA Technical Reports Server (NTRS)

    Crossman, E. R. F. W.

    1972-01-01

    The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and minimize cost. Despite progress in computer science most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area; since real-world production systems invariably include multilevel and multiloop control, and are implemented by timeshared human effort. A modular structure incorporating certain new types of functional element, has been developed. This forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.

  16. Prediction of pilot reserve attention capacity during air-to-air target tracking

    NASA Technical Reports Server (NTRS)

    Onstott, E. D.; Faulkner, W. H.

    1977-01-01

    Reserve attention capacity of a pilot was calculated using a pilot model that allocates exclusive model attention according to the ranking of task urgency functions whose variables are tracking error and error rate. The modeled task consisted of tracking a maneuvering target aircraft both vertically and horizontally, and when possible, performing a diverting side task which was simulated by the precise positioning of an electrical stylus and modeled as a task of constant urgency in the attention allocation algorithm. The urgency of the single loop vertical task is simply the magnitude of the vertical tracking error, while the multiloop horizontal task requires a nonlinear urgency measure of error and error rate terms. Comparison of model results with flight simulation data verified the computed model statistics of tracking error of both axes, lateral and longitudinal stick amplitude and rate, and side task episodes. Full data for the simulation tracking statistics as well as the explicit equations and structure of the urgency function multiaxis pilot model are presented.

  17. Dynamic covalent chemistry enables formation of antimicrobial peptide quaternary assemblies in a completely abiotic manner

    NASA Astrophysics Data System (ADS)

    Reuther, James F.; Dees, Justine L.; Kolesnichenko, Igor V.; Hernandez, Erik T.; Ukraintsev, Dmitri V.; Guduru, Rusheel; Whiteley, Marvin; Anslyn, Eric V.

    2018-01-01

    Naturally occurring peptides and proteins often use dynamic disulfide bonds to impart defined tertiary/quaternary structures for the formation of binding pockets with uniform size and function. Although peptide synthesis and modification are well established, controlling quaternary structure formation remains a significant challenge. Here, we report the facile incorporation of aryl aldehyde and acyl hydrazide functionalities into peptide oligomers via solid-phase copper-catalysed azide-alkyne cycloaddition (SP-CuAAC) click reactions. When mixed, these complementary functional groups rapidly react in aqueous media at neutral pH to form peptide-peptide intermolecular macrocycles with highly tunable ring sizes. Moreover, sequence-specific figure-of-eight, dumbbell-shaped, zipper-like and multi-loop quaternary structures were formed selectively. Controlling the proportions of reacting peptides with mismatched numbers of complementary reactive groups results in the formation of higher-molecular-weight sequence-defined ladder polymers. This also amplified antimicrobial effectiveness in select cases. This strategy represents a general approach to the creation of complex abiotic peptide quaternary structures.

  18. Tunable Universal Filter with Current Follower and Transconductance Amplifiers and Study of Parasitic Influences

    NASA Astrophysics Data System (ADS)

    Jeřábek, Jan; Šotner, Roman; Vrba, Kamil

    2011-11-01

    A universal filter with a dual-output current follower (DO-CF), two transconductance amplifiers (OTAs) and two passive elements is presented in this paper. The filter is tunable, of the single-input multiple-output (SIMO) type, and operates in the current mode. Our solution utilizes a low-impedance input node and high-impedance outputs. All types of the active elements used can be realized using our UCC-N1B 0520 integrated circuit, and therefore the paper contains not only simulation results that were obtained with the help of a behavioral model of the UCC-N1B 0520 element, but also the characteristics that were gained by measurement with the mentioned circuit. The presented simulation and measurement results confirm the quality of the designed filter. Similar multi-loop structures are very well known, but there are some drawbacks that are not discussed in similar papers. This paper also contains a detailed study of parasitic influences on the filter performance.

  19. Ni-MH battery charger with a compensator for electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, H.W.; Han, C.S.; Kim, C.S.

    1996-09-01

    The development of a high-performance battery and safe and reliable charging methods are two important factors for commercialization of Electric Vehicles (EV). Hyundai and Ovonic together spent many years researching the optimum charging method for the Ni-MH battery. This paper presents in detail the results of intensive experimental analysis performed by Hyundai in collaboration with Ovonic. An on-board Ni-MH battery charger and its controller, which are designed to use a standard home electricity supply, are described. In addition, a 3-step constant-current recharger with a temperature and battery-aging compensator is proposed. This has a multi-loop algorithm function to detect the 80% and fully charged states and to carry out equalization charging control. The algorithm is focused on safety, reliability, efficiency, charging speed and thermal management (maintaining uniform temperatures within a battery pack). It is also designed to minimize the necessity for user input.

  20. Conic Sector Analysis of Hybrid Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thompson, P. M.

    1982-01-01

    A hybrid control system contains an analog plant and a hybrid (or sampled-data) compensator. In this thesis a new conic sector is determined which is constructive and can be used to: (1) determine closed loop stability, (2) analyze robustness with respect to modelling uncertainties, (3) analyze steady state response to commands, and (4) select the sample rate. The use of conic sectors allows the designer to treat hybrid control systems as though they were analog control systems. The center of the conic sector can be used as a rigorous linear time invariant approximation of the hybrid control system, and the radius places a bound on the errors of this approximation. The hybrid feedback system can be multivariable, and the sampler is assumed to be synchronous. Algorithms to compute the conic sector are presented. Several examples demonstrate how the conic sector analysis techniques are applied. Extensions to single-loop multirate hybrid feedback systems are presented. Further extensions are proposed for multiloop multirate hybrid feedback systems and for single-rate systems with asynchronous sampling.

  1. Model-based Robotic Dynamic Motion Control for the Robonaut 2 Humanoid Robot

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Hulse, Aaron M.; Taylor, Ross C.; Curtis, Andrew W.; Gooding, Dustin R.; Thackston, Allison

    2013-01-01

    Robonaut 2 (R2), an upper-body dexterous humanoid robot, has been undergoing experimental trials on board the International Space Station (ISS) for more than a year. R2 will soon be upgraded with two climbing appendages, or legs, as well as a new integrated model-based control system. This control system satisfies two important requirements: first, that the robot can allow humans to enter its workspace during operation and, second, that the robot can move its large inertia with enough precision to attach to handrails and seat track while climbing around the ISS. This is achieved by a novel control architecture that features an embedded impedance control law on the motor drivers, called Multi-Loop control, which is tightly interfaced with a kinematic and dynamic coordinated control system nicknamed RoboDyn that resides on centralized processors. This paper presents the integrated control algorithm as well as several test results that illustrate R2's safety features and performance.
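
    For readers unfamiliar with embedded impedance control, the sketch below shows a generic joint-space impedance law of the kind typically run on motor drivers. The gains, feedforward term and two-joint example are illustrative assumptions only; they are not R2's actual Multi-Loop controller.

        import numpy as np

        # Generic joint-space impedance law: tau = K (q_des - q) - D qdot + tau_ff.
        # Gains and the feedforward torque are placeholders, not Robonaut 2's
        # embedded Multi-Loop controller.

        def impedance_torque(q, qdot, q_des, K, D, tau_ff=None):
            q, qdot, q_des = map(np.asarray, (q, qdot, q_des))
            tau = K @ (q_des - q) - D @ qdot
            if tau_ff is not None:
                tau = tau + np.asarray(tau_ff)
            return tau

        # Example: a two-joint arm with a stiff shoulder and a compliant elbow.
        K = np.diag([50.0, 10.0])   # N*m/rad
        D = np.diag([5.0, 1.0])     # N*m*s/rad
        print(impedance_torque([0.0, 0.1], [0.0, 0.0], [0.1, 0.1], K, D))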

  2. Monte Carlo dose calculation using a cell processor based PlayStation 3 system

    NASA Astrophysics Data System (ADS)

    Chow, James C. L.; Lam, Phil; Jaffray, David A.

    2012-02-01

    This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, HOWNEAR and RANMAR_GET, in the EGSnrc code were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured with the profiler gprof, which reports the number of executions and the total time spent in each function. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change is made to EGSnrc. However, as EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.

  3. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability, real-time image processing, low memory consumption and breadth of applications.
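
    To make the linear system in the abstract concrete, the following is a minimal single-threaded Jacobi iteration for the discrete Poisson equation used in gradient-domain compositing. It is only an illustration of the equation being solved; MDGS itself is a GPU matrix-decomposition solver, not this plain iteration, and the function names here are ad hoc.

        import numpy as np

        # Jacobi iteration for the discrete Poisson equation of seamless cloning:
        # f[i+1,j] + f[i-1,j] + f[i,j+1] + f[i,j-1] - 4*f[i,j] = div[i,j] inside
        # the cloned region, with the target image fixed outside it.  The mask
        # must exclude the image border because np.roll wraps around.

        def poisson_jacobi(div, target, mask, iters=500):
            f = target.astype(float).copy()
            for _ in range(iters):
                neighbours = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                              np.roll(f, 1, 1) + np.roll(f, -1, 1))
                f = np.where(mask, (neighbours - div) / 4.0, target)
            return f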

  4. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation that combines block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected-components analysis, and then determining the boundaries' disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross-correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
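
    The SAD cost mentioned above is simple enough to sketch. The brute-force dense block matcher below is only meant to illustrate the cost function; the paper applies it selectively at segment boundaries, and the block size and disparity range used here are arbitrary assumptions.

        import numpy as np

        # Brute-force SAD block matching between rectified grayscale images.
        # For each left-image block, the disparity is the horizontal shift of the
        # right-image block with the smallest sum of absolute differences.

        def sad_disparity(left, right, max_disp=64, block=7):
            h, w = left.shape
            r = block // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(r, h - r):
                for x in range(r + max_disp, w - r):
                    patch = left[y - r:y + r + 1, x - r:x + r + 1]
                    costs = [np.abs(patch - right[y - r:y + r + 1,
                                                  x - d - r:x - d + r + 1]).sum()
                             for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))
            return disp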

  5. Efficient Stochastic Rendering of Static and Animated Volumes Using Visibility Sweeps.

    PubMed

    von Radziewsky, Philipp; Kroes, Thomas; Eisemann, Martin; Eisemann, Elmar

    2017-09-01

    Stochastically solving the rendering integral (particularly visibility) is the de facto standard for physically based light transport, but it is computationally expensive, especially when displaying heterogeneous volumetric data. In this work, we present efficient techniques to speed up the rendering process via a novel visibility-estimation method in concert with an unbiased importance sampling (involving environmental lighting and visibility inside the volume), filtering, and update techniques for both static and animated scenes. Our major contributions include a progressive estimate of partial occlusions based on a fast sweeping-plane algorithm. These occlusions are stored in an octahedral representation, which can be conveniently transformed into a quadtree-based hierarchy suited for a joint importance sampling. Further, we propose sweep-space filtering, which suppresses the occurrence of fireflies, and we investigate different update schemes for animated scenes. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, animated volumes and changing environmental lighting.

  6. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    NASA Astrophysics Data System (ADS)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero-temperature limit of the cavity equations and as such is formally simple (a fixed-point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch-and-cut approach based on integer linear programming. The comparison shows that the cavity algorithm outperforms the two algorithms on most large instances, both in running time and in quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  7. A hierarchical exact accelerated stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Orendorff, David; Mjolsness, Eric

    2012-12-01

    A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.

  8. Stochastic DT-MRI connectivity mapping on the GPU.

    PubMed

    McGraw, Tim; Nadar, Mariappan

    2007-01-01

    We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.
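
    Since each simulated fiber is generated independently, the computation maps naturally onto parallel hardware. The sketch below shows a generic stochastic streamline propagated along the perturbed principal eigenvector of the local diffusion tensor; it is a conceptual illustration with made-up step size and noise level, not the paper's Bayesian fiber model or its GPU implementation.

        import numpy as np

        # Generic stochastic fiber tracking: step along the principal eigenvector
        # of the local diffusion tensor, perturbed by noise.  Each path is
        # independent of the others, which is what makes the method parallel.

        def track_fiber(tensors, seed, steps=200, step_len=0.5, noise=0.1, seed_rng=0):
            rng = np.random.default_rng(seed_rng)
            pos = np.asarray(seed, dtype=float)
            prev_dir = np.zeros(3)
            path = [pos.copy()]
            for _ in range(steps):
                i, j, k = np.clip(pos.astype(int), 0,
                                  np.array(tensors.shape[:3]) - 1)
                w, v = np.linalg.eigh(tensors[i, j, k])   # 3x3 tensor at the voxel
                d = v[:, np.argmax(w)]                    # principal eigenvector
                if d @ prev_dir < 0:
                    d = -d                                # keep orientation consistent
                d = d + noise * rng.standard_normal(3)
                d /= np.linalg.norm(d)
                pos = pos + step_len * d
                prev_dir = d
                path.append(pos.copy())
            return np.array(path)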

  9. Memory-efficient RNA energy landscape exploration

    PubMed Central

    Mann, Martin; Kucharík, Marcel; Flamm, Christoph; Wolfinger, Michael T.

    2014-01-01

    Motivation: Energy landscapes provide a valuable means for studying the folding dynamics of short RNA molecules in detail by modeling all possible structures and their transitions. Higher abstraction levels based on a macro-state decomposition of the landscape enable the study of larger systems; however, they are still restricted by huge memory requirements of exact approaches. Results: We present a highly parallelizable local enumeration scheme that enables the computation of exact macro-state transition models with highly reduced memory requirements. The approach is evaluated on RNA secondary structure landscapes using a gradient basin definition for macro-states. Furthermore, we demonstrate the need for exact transition models by comparing two barrier-based approaches, and perform a detailed investigation of gradient basins in RNA energy landscapes. Availability and implementation: Source code is part of the C++ Energy Landscape Library available at http://www.bioinf.uni-freiburg.de/Software/. Contact: mmann@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24833804

  10. Basis function models for animal movement

    USGS Publications Warehouse

    Hooten, Mevin B.; Johnson, Devin S.

    2017-01-01

    Advances in satellite-based data collection techniques have served as a catalyst for new statistical methodology to analyze these data. In wildlife ecological studies, satellite-based data and methodology have provided a wealth of information about animal space use and the investigation of individual-based animal–environment relationships. With the technology for data collection improving dramatically over time, we are left with massive archives of historical animal telemetry data of varying quality. While many contemporary statistical approaches for inferring movement behavior are specified in discrete time, we develop a flexible continuous-time stochastic integral equation framework that is amenable to reduced-rank second-order covariance parameterizations. We demonstrate how the associated first-order basis functions can be constructed to mimic behavioral characteristics in realistic trajectory processes using telemetry data from mule deer and mountain lion individuals in western North America. Our approach is parallelizable and provides inference for heterogeneous trajectories using nonstationary spatial modeling techniques that are feasible for large telemetry datasets. Supplementary materials for this article are available online.

  11. Posttest analysis of MIST Test 3109AA using TRAC-PF1/MOD1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, J.L.; Siebe, D.A.; Boyack, B.E.

    This document discusses a posttest calculation and analysis of Multi-loop Integral System Test (MIST) 3109AA as the nominal test for the MIST program. It is a test of a small-break loss-of-coolant accident (SBLOCA) with a scaled 10-cm² break in the B1 cold leg. The test exhibited the major post-SBLOCA phenomena, as expected, including depressurization to saturation, intermittent and interrupted loop flow, boiler-condenser mode cooling, refill, and postrefill cooldown. Full high-pressure injection and auxiliary feedwater were available, reactor coolant pumps were not available, and reactor-vessel vent valves and guard heaters were automatically controlled. Constant level control in the steam-generator secondaries was used after steam-generator secondary refill, and symmetric steam-generator pressure control was used. We performed the calculation using TRAC-PF1/MOD1. Agreement between test data and the calculation was generally reasonable. All major trends and phenomena were correctly predicted. It is believed that the correct conclusions about trends and phenomena will be reached if the code is used in similar applications.

  13. The CdZnTe Detector with Slit Collimator for Measure Distribution of the Specific Activity Radionuclide in the Ground

    NASA Astrophysics Data System (ADS)

    Stepanov, V. E.; Volkovich, A. G.; Potapov, V. N.; Semin, I. A.; Stepanov, A. V.; Simirskii, Iu. N.

    2018-01-01

    Since 2011, the NRC "Kurchatov Institute" has been dismantling the MR multiloop research reactor. The reactor and all technological equipment in the reactor premises have now been dismantled, and measurements of the radioactive contamination of the premises are under way. The most contaminated parts of the premises are the floor and the ground beneath it. To measure the distribution of specific activity in the ground, a CdZnTe detector (volume 500 mm³) was used. The detector is placed in lead shielding with a slit collimation hole; the upper part of the shielding is movable so that the collimator slit can be opened and closed. At each point two measurements are carried out, one with the collimator open and one with it closed. Software for determining the specific activity of radionuclides in the ground was developed, with a mathematical model of the spectrometric system based on the Monte Carlo method. Measurements of the specific activity of the ground were made, and from these results the thickness of the ground layer to be removed and the amount of radioactive waste were calculated.

  14. Accelerator diagnosis and control by Neural Nets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, J.E.

    1989-01-01

    Neural Nets (NN) have been described as a solution looking for a problem. In the last conference, Artificial Intelligence (AI) was considered in the accelerator context. While good for local surveillance and control, its use for large complex systems (LCS) was much more restricted. By contrast, NN provide a good metaphor for LCS. It can be argued that they are logically equivalent to multi-loop feedback/forward control of faulty systems, and therefore provide an ideal adaptive control system. Thus, where AI may be good for maintaining a 'golden orbit,' NN should be good for obtaining it via a quantitative approach to 'look and adjust' methods like operator tweaking which use pattern recognition to deal with hardware and software limitations, inaccuracies or errors as well as imprecise knowledge or understanding of effects like annealing and hysteresis. Further, insights from NN allow one to define feasibility conditions for LCS in terms of design constraints and tolerances. Hardware and software implications are discussed and several LCS of current interest are compared and contrasted. 15 refs., 5 figs.

  15. Top-forms of leading singularities in nonplanar multi-loop amplitudes

    NASA Astrophysics Data System (ADS)

    Chen, Baoyi; Chen, Gang; Cheung, Yeuk-Kwan E.; Xie, Ruofei; Xin, Yuan

    2018-02-01

    The on-shell diagram is a very important tool in studying scattering amplitudes. In this paper we discuss on-shell diagrams without external BCFW bridges. We introduce an extra step of adding an auxiliary external momentum line. We can then decompose the on-shell diagrams, by removing external BCFW bridges, down to a planar diagram whose top-form is now well known. The top-form of the on-shell diagram with the auxiliary line can be obtained by adding the BCFW bridges back in inverse order, as discussed in our earlier paper (Chen et al. in Eur Phys J C 77(2):80, 2017). To get the top-form of the original diagram, the soft limit of the auxiliary line is needed. We obtain the evolution rule for the Grassmannian integral and the geometry constraint in the soft limit. This completes the top-form description of leading singularities in nonplanar scattering amplitudes of N=4 Super Yang-Mills (SYM), which is valid at arbitrarily high loop order and beyond the Maximally-Helicity-Violating (MHV) amplitudes.

  16. The contact condition influence on stability and energy efficiency of quadruped robot

    NASA Astrophysics Data System (ADS)

    Lei, Jingtao; Wang, Tianmiao; Gao, Feng

    2008-10-01

    A quadruped robot combines attributes of serial and parallel manipulators in a multi-loop mechanism, with several DOF per leg and intermittent ground contact during walking. The trot gait is a dynamic gait: compared with the crawl gait the walking speed is higher, but the robot is less stable and it is difficult to maintain dynamically stable walking. In this paper we analyze the conditions under which the quadruped robot can realize dynamically stable walking and establish a centroid orbit equation based on ZMP (Zero Moment Point) stability theory; in addition, we study the influence of contact impact and friction on stability and energy efficiency. Because of the periodic contact between the feet and the ground, the contact impact and friction are captured in a nonlinear spring-damper dynamics model. The robot must be controlled to satisfy both the ZMP stability condition and the contact constraint condition. Based on a virtual prototyping model, we study a control algorithm that accounts for the contact conditions, adopting contact and friction compensators. The contact force and the influence of different contact conditions on energy efficiency over the whole gait cycle are obtained.
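
    As a minimal illustration of the spring-damper contact model mentioned above, the sketch below computes a normal ground-reaction force plus simple Coulomb friction. The stiffness, damping and friction coefficients are arbitrary placeholders, not the parameters identified in the paper.

        # Kelvin-Voigt (spring-damper) normal contact force with Coulomb friction,
        # of the kind used to model intermittent foot-ground contact.  All
        # coefficients are illustrative placeholders.

        def contact_normal_force(penetration, penetration_rate, k=5.0e4, c=2.0e2):
            """Ground reaction force; zero when the foot is off the ground."""
            if penetration <= 0.0:
                return 0.0
            return max(k * penetration + c * penetration_rate, 0.0)  # never pulls

        def coulomb_friction(normal_force, sliding_speed, mu=0.6):
            """Tangential friction force opposing the sliding direction."""
            if sliding_speed == 0.0:
                return 0.0
            return -mu * normal_force * (1.0 if sliding_speed > 0 else -1.0)

        print(contact_normal_force(0.002, 0.05))   # foot pressed 2 mm into ground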

  17. ASPeak: an abundance sensitive peak detection algorithm for RIP-Seq.

    PubMed

    Kucukural, Alper; Özadam, Hakan; Singh, Guramrit; Moore, Melissa J; Cenik, Can

    2013-10-01

    Unlike DNA, RNA abundances can vary over several orders of magnitude. Thus, identification of RNA-protein binding sites from high-throughput sequencing data presents unique challenges. Although peak identification in ChIP-Seq data has been extensively explored, there are few bioinformatics tools tailored for peak calling on analogous datasets for RNA-binding proteins. Here we describe ASPeak (abundance sensitive peak detection algorithm), an implementation of an algorithm that we previously applied to detect peaks in exon junction complex RNA immunoprecipitation in tandem experiments. Our peak detection algorithm yields stringent and robust target sets enabling sensitive motif finding and downstream functional analyses. ASPeak is implemented in Perl as a complete pipeline that takes bedGraph files as input. ASPeak implementation is freely available at https://sourceforge.net/projects/as-peak under the GNU General Public License. ASPeak can be run on a personal computer, yet is designed to be easily parallelizable. ASPeak can also run on high performance computing clusters providing efficient speedup. The documentation and user manual can be obtained from http://master.dl.sourceforge.net/project/as-peak/manual.pdf.

  18. Size-Based Separation of Particles and Cells Utilizing Viscoelastic Effects in Straight Microchannels.

    PubMed

    Liu, Chao; Xue, Chundong; Chen, Xiaodong; Shan, Lei; Tian, Yu; Hu, Guoqing

    2015-06-16

    Viscoelasticity-induced particle migration has recently received increasing attention due to its ability to provide high-quality focusing over a wide range of flow rates. However, its application has been limited to the low-throughput regime, since the particles defocus as the flow rate increases. Using an engineered carrier medium with constant, low viscosity and strong elasticity, the sample flow rates are improved to be one order of magnitude higher than those in existing studies. Utilizing the differential focusing of particles of different sizes, here we present sheathless particle/cell separation in simple straight microchannels that possess excellent parallelizability for further throughput enhancement. The present method can be implemented over a wide range of particle/cell sizes and flow rates. We successfully separate small particles from larger particles, MCF-7 cells from red blood cells (RBCs), and Escherichia coli (E. coli) bacteria from RBCs in different straight microchannels. The proposed method could broaden the applications of viscoelastic microfluidic devices to particle/cell separation due to the enhanced sample throughput and simple channel design.

  19. Manticore and CS mode : parallelizable encryption with joint cipher-state authentication.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree

    2004-10-01

    We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: (1) the encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined, (2) the authentication overhead is minimal, and (3) the authentication process remains resistant against some IV reuse. We offer a Manticore class of authenticated encryption algorithms based on cryptographic hash functions, which support variable block sizes up to twice the hash output length and variable key lengths. A proof of security is presented for the MTC4 and Pepper algorithms. We then generalize the construction to create the Cipher-State (CS) mode of encryption that uses the internal state of any round-based block cipher as an authenticator. We provide hardware and software performance estimates for all of our constructions and give a concrete example of the CS mode of encryption that uses AES as the encryption primitive and adds a small speed overhead (10-15%) compared to AES alone.

  20. Disk-based compression of data from genome sequencing.

    PubMed

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. The more interesting solutions to this problem are disk based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.
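
    The minimizer idea itself is compact enough to sketch: the minimizer of a read is its lexicographically smallest k-mer, and overlapping reads tend to share one, so bucketing reads by minimizer brings redundant sequence together before compression. The snippet below only illustrates this idea; it is not the tool distributed at the URL above, and the k-mer length and example reads are arbitrary.

        from collections import defaultdict

        # Group reads by their minimizer (lexicographically smallest k-mer).
        # Overlapping reads usually share a minimizer, so each bucket holds
        # highly similar sequence that compresses well.

        def minimizer(read, k=8):
            return min(read[i:i + k] for i in range(len(read) - k + 1))

        def bucket_reads(reads, k=8):
            buckets = defaultdict(list)
            for read in reads:
                buckets[minimizer(read, k)].append(read)
            return buckets

        reads = ["ACGTACGTACGT", "CGTACGTACGTT", "TTTTGGGGCCCC"]
        for m, group in bucket_reads(reads).items():
            print(m, group)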

  1. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.

  2. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  3. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGES

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
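
    A typical frugal, local method is finite-difference sensitivity analysis, which needs only about two model runs per parameter and parallelizes trivially. The sketch below uses a toy exponential-decay model and a process pool as stand-ins; the model, parameters and step size are assumptions for illustration, not anything from the paper.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        # Local sensitivity analysis by central finite differences: ~2p runs for
        # p parameters, all independent and therefore trivially parallelizable.

        def model(params):
            a, b = params
            return a * np.exp(-b * np.linspace(0.0, 10.0, 50))  # toy model output

        def local_sensitivities(params, rel_step=0.01):
            params = np.asarray(params, dtype=float)
            runs = []
            for i in range(len(params)):
                h = rel_step * params[i]
                up, down = params.copy(), params.copy()
                up[i] += h
                down[i] -= h
                runs += [up, down]
            with ProcessPoolExecutor() as pool:          # parallel model runs
                outputs = list(pool.map(model, runs))
            return np.array([(outputs[2 * i] - outputs[2 * i + 1]) /
                             (2 * rel_step * params[i]) for i in range(len(params))])

        if __name__ == "__main__":
            print(local_sensitivities([2.0, 0.5]).shape)  # (n_params, n_outputs)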

  4. Block-Level Added Redundancy Explicit Authentication for Parallelized Encryption and Integrity Checking of Processor-Memory Transactions

    NASA Astrophysics Data System (ADS)

    Elbaz, Reouven; Torres, Lionel; Sassatelli, Gilles; Guillemin, Pierre; Bardouillet, Michel; Martinez, Albert

    The bus between the System on Chip (SoC) and the external memory is one of the weakest points of computer systems: an adversary can easily probe this bus in order to read private data (data confidentiality concern) or to inject data (data integrity concern). The conventional way to protect data against such attacks and to ensure data confidentiality and integrity is to implement two dedicated engines: one performing data encryption and another data authentication. This approach, while secure, prevents parallelizability of the underlying computations. In this paper, we introduce the concept of Block-Level Added Redundancy Explicit Authentication (BL-AREA) and we describe a Parallelized Encryption and Integrity Checking Engine (PE-ICE) based on this concept. BL-AREA and PE-ICE have been designed to provide an effective solution to ensure both security services while allowing for full parallelization on processor read and write operations and optimizing the hardware resources. Compared to standard encryption which ensures only confidentiality, we show that PE-ICE additionally guarantees code and data integrity for less than 4% of run-time performance overhead.

  5. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik V.; Marwan, Norbert; Dijkstra, Henk A.; Kurths, Jürgen

    2015-11-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.
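
    The recurrence-based methods mentioned above are built on a simple object, the recurrence matrix. The numpy sketch below computes one for a scalar time series purely to illustrate the concept; it does not reproduce the pyunicorn API, and the threshold and test signal are arbitrary.

        import numpy as np

        # Recurrence matrix of a scalar time series: R[i, j] = 1 when states i and
        # j are closer than a threshold eps.  Recurrence quantification analysis
        # and recurrence networks are defined on top of this matrix.

        def recurrence_matrix(x, eps):
            x = np.asarray(x, dtype=float)
            dist = np.abs(x[:, None] - x[None, :])
            return (dist <= eps).astype(np.uint8)

        t = np.linspace(0.0, 8.0 * np.pi, 400)
        R = recurrence_matrix(np.sin(t), eps=0.1)
        print("recurrence rate:", R.mean())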

  6. Morphological characterization of as-received and in vivo orthodontic stainless steel archwires.

    PubMed

    Daems, Julie; Celis, Jean-Pierre; Willems, Guy

    2009-06-01

    This study was undertaken to evaluate the material degradation of clinical bracket-archwire-contacting surfaces after in vivo orthodontic use. Twenty-four stainless steel multiloop edgewise archwires with two different cross sections (0.016 x 0.016 and 0.016 x 0.022 inches) were used for at least 6 months in the mouths of 14 patients. The surfaces of both as-received (cross sections of 0.016 x 0.016, 0.016 x 0.022, and 0.017 x 0.025 inches) and in vivo wires were examined using scanning electron microscopy. The as-received wires exhibited an inhomogeneous surface with different surface irregularities resulting from the manufacturing process. For the in vivo archwires, an increase in the variety, type, and number of surface irregularities was observed. Crevice corrosion occurred not only at surface irregularities formed during manufacturing and orthodontic handling but also at the bracket-archwire-contacting surfaces and at the archwire surfaces coated with plaque and food remnants. This corrosion may be linked to the formation of a micro-environment at these locations. In addition, a limited number of signs of degradation induced during in vivo testing due to wear and friction were observed.

  7. Mrst '96: Current Ideas in Theoretical Physics - Proceedings of the Eighteenth Annual Montréal-Rochester-Syracuse-Toronto Meeting

    NASA Astrophysics Data System (ADS)

    O'Donnell, Patrick J.; Smith, Brian Hendee

    1996-11-01

    The Table of Contents for the full book PDF is as follows: * Preface * Roberto Mendel, An Appreciaton * The Infamous Coulomb Gauge * Renormalized Path Integral in Quantum Mechanics * New Analysis of the Divergence of Perturbation Theory * The Last of the Soluble Two Dimensional Field Theories? * Rb and Heavy Quark Mixing * Rb Problem: Loop Contributions and Supersymmetry * QCD Radiative Effects in Inclusive Hadronic B Decays * CP-Violating Dipole Moments of Quarks in the Kobayashi-Maskawa Model * Hints of Dynamical Symmetry Breaking? * Pi Pi Scattering in an Effective Chiral Lagrangian * Pion-Resonance Parameters from QCD Sum Rules * Higgs Theorem, Effective Action, and its Gauge Invariance * SUSY and the Decay H_2^0 to gg * Effective Higgs-to-Light Quark Coupling Induced by Heavy Quark Loops * Heavy Charged Lepton Production in Superstring Inspired E6 Models * The Elastic Properties of a Flat Crystalline Membrane * Gauge Dependence of Topological Observables in Chern-Simons Theory * Entanglement Entropy From Edge States * A Simple General Treatment of Flavor Oscillations * From Schrödinger to Maupertuis: Least Action Principles from Quantum Mechanics * The Matrix Method for Multi-Loop Feynman Integrals * Simplification in QCD and Electroweak Calculations * Programme * List of Participants

  8. A reactive, scalable, and transferable model for molecular energies from a neural network approach based on local information

    NASA Astrophysics Data System (ADS)

    Unke, Oliver T.; Meuwly, Markus

    2018-06-01

    Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol⁻¹ for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.

  9. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  10. Improving detection of low SNR targets using moment-based detection

    NASA Astrophysics Data System (ADS)

    Young, Shannon R.; Steward, Bryan J.; Hawks, Michael; Gross, Kevin C.

    2016-05-01

    Increases in the number of cameras deployed, frame rate, and detector array sizes have led to a dramatic increase in the volume of motion imagery data that is collected. Without a corresponding increase in analytical manpower, much of the data is not analyzed to full potential. This creates a need for fast, automated, and robust methods for detecting signals of interest. Current approaches fall into two categories: detect-before-track (DBT), which are fast but often poor at detecting dim targets, and track-before-detect (TBD) methods which can offer better performance but are typically much slower. This research seeks to contribute to the near-real-time detection of low SNR, unresolved moving targets through an extension of earlier work on higher-order moments anomaly detection, a method that exploits both spatial and temporal information but is still computationally efficient and massively parallelizable. It was found that intelligent selection of parameters can improve probability of detection by as much as 25% compared to earlier work with higher-order moments. The present method can reduce detection thresholds by 40% compared to the Reed-Xiaoli anomaly detector for low SNR targets (for a given probability of detection and false alarm).
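
    To make the moment-based idea concrete, the sketch below computes a per-pixel temporal excess kurtosis over a frame stack and thresholds it: a dim target that briefly crosses a pixel perturbs that pixel's temporal distribution. The window length, threshold and injected test event are assumptions for illustration, not the paper's tuned parameters.

        import numpy as np

        # Per-pixel temporal higher-order moment (excess kurtosis) over a stack of
        # frames; pixels crossed by a brief dim event stand out from the steady
        # background statistics.

        def moment_detect(frames, threshold=6.0):
            frames = np.asarray(frames, dtype=float)      # shape (T, H, W)
            mu = frames.mean(axis=0)
            sigma = frames.std(axis=0) + 1e-12
            z = (frames - mu) / sigma
            excess_kurtosis = (z ** 4).mean(axis=0) - 3.0
            return excess_kurtosis > threshold            # boolean detection map

        rng = np.random.default_rng(1)
        frames = rng.normal(size=(64, 32, 32))
        frames[30:34, 10, 10] += 5.0                      # inject a brief dim event
        print(moment_detect(frames)[10, 10])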

  11. Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD

    NASA Technical Reports Server (NTRS)

    Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.

    1998-01-01

    Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.
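
    The Newton-Krylov building block is available off the shelf, for example in SciPy, which makes the matrix-free idea easy to demonstrate on a toy problem. The sketch below solves a small 1-D nonlinear boundary-value problem without ever forming a Jacobian; it shows only the NK core, with no Schwarz preconditioner or pseudo-transient continuation, and the problem itself is an arbitrary stand-in.

        import numpy as np
        from scipy.optimize import newton_krylov

        # Matrix-free Newton-Krylov on a toy nonlinear BVP: u'' = exp(u) on (0, 1)
        # with u(0) = u(1) = 0.  The Krylov solver only needs Jacobian-vector
        # products, obtained internally by finite differences.

        n = 64
        h = 1.0 / (n + 1)

        def residual(u):
            upad = np.concatenate(([0.0], u, [0.0]))       # Dirichlet boundaries
            return (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2 - np.exp(u)

        u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
        print("max |residual| =", np.abs(residual(u)).max())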

  12. Knowledge, transparency, and refutability in groundwater models, an example from the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Hill, Mary C.; Faunt, Claudia C.; Belcher, Wayne; Sweetkind, Donald; Tiedeman, Claire; Kavetski, Dmitri

    2013-01-01

    This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high-level nuclear waste repository of the United States of America and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing old and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (tens to thousands of) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.

  13. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    PubMed

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly spaced along a line or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera to the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach yields a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.

  14. Boosting Bayesian parameter inference of stochastic differential equation models with methods from statistical physics

    NASA Astrophysics Data System (ADS)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and is almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail for models of this kind because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem into the problem of simulating the dynamics of a statistical mechanics system, and give us access to the most sophisticated methods developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automated differentiation algorithms, allow us to perform a full-fledged Bayesian inference for a large class of SDE models in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
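
    For readers unfamiliar with the Hamiltonian Monte Carlo machinery invoked above, the sketch below shows one textbook HMC transition (leapfrog integration followed by a Metropolis accept/reject) on a generic log-posterior. The target here is a simple 2-D Gaussian for illustration; the abstract's method additionally treats the discretized SDE realization itself as part of the state, which is not shown.

        import numpy as np

        # One textbook Hamiltonian Monte Carlo transition: resample momenta,
        # integrate Hamiltonian dynamics with a leapfrog scheme, then apply a
        # Metropolis accept/reject step.

        def hmc_step(q, log_post, grad_log_post, step=0.1, n_leapfrog=20,
                     rng=np.random.default_rng(0)):
            p = rng.standard_normal(q.shape)
            q_new, p_new = q.copy(), p.copy()
            p_new += 0.5 * step * grad_log_post(q_new)       # half momentum step
            for _ in range(n_leapfrog - 1):
                q_new += step * p_new
                p_new += step * grad_log_post(q_new)
            q_new += step * p_new
            p_new += 0.5 * step * grad_log_post(q_new)       # final half step
            h_old = -log_post(q) + 0.5 * p @ p
            h_new = -log_post(q_new) + 0.5 * p_new @ p_new
            return q_new if np.log(rng.uniform()) < h_old - h_new else q

        # Illustrative target: a standard 2-D Gaussian.
        log_post = lambda q: -0.5 * q @ q
        grad_log_post = lambda q: -q
        q = np.zeros(2)
        for _ in range(100):
            q = hmc_step(q, log_post, grad_log_post)
        print(q)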

  15. Application of a sensitivity analysis technique to high-order digital flight control systems

    NASA Technical Reports Server (NTRS)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
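
    The singular-value measure discussed above is easy to evaluate numerically for a given loop transfer matrix. The sketch below computes the minimum singular value of the return difference matrix I + L(jω) over a frequency grid for an invented 2x2 loop transfer matrix; it illustrates the quantity whose gradients the paper studies, not the X-29 control laws or the SVA program.

        import numpy as np

        # Minimum singular value of the return difference matrix I + L(jw) on a
        # frequency grid.  Small values flag frequencies at which the multiloop
        # feedback system is close to instability.  The plant below is a toy.

        def loop_tf(s):
            """Invented 2x2 loop transfer matrix L(s)."""
            return np.array([[2.0 / (s + 1.0), 0.5 / (s + 2.0)],
                             [0.1 / (s + 1.0), 1.0 / (s * (s + 3.0))]])

        freqs = np.logspace(-2, 2, 200)
        sigma_min = [np.linalg.svd(np.eye(2) + loop_tf(1j * w),
                                   compute_uv=False)[-1] for w in freqs]
        print("worst-case sigma_min(I + L):", min(sigma_min))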

  16. Segmented surface coil resonator for in vivo EPR applications at 1.1 GHz.

    PubMed

    Petryakov, Sergey; Samouilov, Alexandre; Chzhan-Roytenberg, Michael; Kesselring, Eric; Sun, Ziqi; Zweier, Jay L

    2009-05-01

    A four-loop segmented surface coil resonator (SSCR) with electronic frequency and coupling adjustments was constructed with 18 mm aperture and loading capability suitable for in vivo Electron Paramagnetic Resonance (EPR) spectroscopy and imaging applications at L-band. Increased sample volume and loading capability were achieved by employing a multi-loop three-dimensional surface coil structure. Symmetrical design of the resonator with coupling to each loop resulted in high homogeneity of RF magnetic field. Parallel loops were coupled to the feeder cable via balancing circuitry containing varactor diodes for electronic coupling and tuning over a wide range of loading conditions. Manually adjusted high Q trimmer capacitors were used for initial tuning with subsequent tuning electronically controlled using varactor diodes. This design provides transparency and homogeneity of magnetic field modulation in the sample volume, while matching components are shielded to minimize interference with modulation and ambient RF fields. It can accommodate lossy samples up to 90% of its aperture with high homogeneity of RF and modulation magnetic fields and can function as a surface loop or a slice volume resonator. Along with an outer coaxial NMR surface coil, the SSCR enabled EPR/NMR co-imaging of paramagnetic probes in living rats to a depth of 20 mm.

  17. Amplitudes in the N=4 supersymmetric Yang-Mills theory from quantum geometry of momentum space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorsky, A.

    We discuss multiloop maximally helicity violating amplitudes in the N=4 supersymmetric Yang-Mills theory in terms of effective gravity in the momentum space with IR regulator branes as degrees of freedom. Kinematical invariants of external particles yield the moduli spaces of complex or Kahler structures, which are the playgrounds for the Kodaira-Spencer or Kahler type gravity. We suggest a fermionic representation of the loop maximally helicity violating amplitudes in the N=4 supersymmetric Yang-Mills theory, assuming the identification of the IR regulator branes with Kodaira-Spencer fermions in the B model and Lagrangian branes in the A model. The two-mass-easy box diagram is related to the correlator of fermionic currents on the spectral curve in the B model, or to a hyperbolic volume in the A model, and it plays the role of a building block in the whole picture. The Bern-Dixon-Smirnov-like ansatz has the interpretation as the semiclassical limit of a fermionic correlator. It is argued that the fermionic representation implies a kind of integrability on the moduli spaces. We conjecture an interpretation of the reggeon degrees of freedom in terms of open strings stretched between the IR regulator branes.

  18. TRAC-PF1/MOD1 pretest predictions of MIST experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyack, B.E.; Steiner, J.L.; Siebe, D.A.

    Los Alamos National Laboratory is a participant in the Integral System Test (IST) program initiated in June 1983 to provide integral system test data on specific issues and phenomena relevant to post small-break loss-of-coolant accidents (SBLOCAs) in Babcock and Wilcox plant designs. The Multi-Loop Integral System Test (MIST) facility is the largest single component in the IST program. During Fiscal Year 1986, Los Alamos performed five MIST pretest analyses. The five experiments were chosen on the basis of their potential either to approach the facility limits or to challenge the predictive capability of the TRAC-PF1/MOD1 code. Three SBLOCA tests were examined which included nominal test conditions, throttled auxiliary feedwater and asymmetric steam-generator cooldown, and reduced high-pressure-injection (HPI) capacity, respectively. Also analyzed were two "feed-and-bleed" cooling tests with reduced HPI and delayed HPI initiation. Results of the tests showed that the MIST facility limits would not be approached in the five tests considered. Early comparisons with preliminary test data indicate that the TRAC-PF1/MOD1 code is correctly calculating the dominant phenomena occurring in the MIST facility during the tests. Posttest analyses are planned to provide a quantitative assessment of the code's ability to predict MIST transients.

  19. Robust Stability Analysis of the Space Launch System Control Design: A Singular Value Approach

    NASA Technical Reports Server (NTRS)

    Pei, Jing; Newsome, Jerry R.

    2015-01-01

    Classical stability analysis consists of breaking the feedback loops one at a time and determining separately how much gain or phase variations would destabilize the stable nominal feedback system. For typical launch vehicle control design, classical control techniques are generally employed. In addition to stability margins, frequency domain Monte Carlo methods are used to evaluate the robustness of the design. However, such techniques were developed for Single-Input-Single-Output (SISO) systems and do not take into consideration the off-diagonal terms in the transfer function matrix of Multi-Input-Multi-Output (MIMO) systems. Robust stability analysis techniques such as H∞ and μ are applicable to MIMO systems but have not been adopted as standard practices within the launch vehicle controls community. This paper took advantage of a simple singular-value-based MIMO stability margin evaluation method based on work done by Mukhopadhyay and Newsom and applied it to the SLS high-fidelity dynamics model. The method computes a simultaneous multi-loop gain and phase margin that could be related back to classical margins. The results presented in this paper suggest that for the SLS system, traditional SISO stability margins are similar to the MIMO margins. This additional level of verification provides confidence in the robustness of the control design.

  20. Segmented surface coil resonator for in vivo EPR applications at 1.1 GHz

    PubMed Central

    Petryakov, Sergey; Samouilov, Alexandre; Chzhan-Roytenberg, Michael; Kesselring, Eric; Sun, Ziqi; Zweier, Jay L.

    2010-01-01

    A four-loop segmented surface coil resonator (SSCR) with electronic frequency and coupling adjustments was constructed with 18 mm aperture and loading capability suitable for in vivo Electron Paramagnetic Resonance (EPR) spectroscopy and imaging applications at L-band. Increased sample volume and loading capability were achieved by employing a multi-loop three-dimensional surface coil structure. Symmetrical design of the resonator with coupling to each loop resulted in high homogeneity of RF magnetic field. Parallel loops were coupled to the feeder cable via balancing circuitry containing varactor diodes for electronic coupling and tuning over a wide range of loading conditions. Manually adjusted high Q trimmer capacitors were used for initial tuning with subsequent tuning electronically controlled using varactor diodes. This design provides transparency and homogeneity of magnetic field modulation in the sample volume, while matching components are shielded to minimize interference with modulation and ambient RF fields. It can accommodate lossy samples up to 90% of its aperture with high homogeneity of RF and modulation magnetic fields and can function as a surface loop or a slice volume resonator. Along with an outer coaxial NMR surface coil, the SSCR enabled EPR/NMR co-imaging of paramagnetic probes in living rats to a depth of 20 mm. PMID:19268615

  1. Passive advection of a vector field: Anisotropy, finite correlation time, exact solution, and logarithmic corrections to ordinary scaling

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2015-10-01

    In this work we study the generalization of the problem considered in [Phys. Rev. E 91, 013002 (2015), 10.1103/PhysRevE.91.013002] to the case of finite correlation time of the environment (velocity) field. The model describes a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow. Inertial-range asymptotic behavior is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, with finite correlation time and preassigned pair correlation function. Due to the presence of a distinguished direction n, all the multiloop diagrams in this model vanish, so that the results obtained are exact. The inertial-range behavior of the model is described by two regimes (the limits of vanishing or infinite correlation time) that correspond to the two nontrivial fixed points of the RG equations. Their stability depends on the relation between the exponents in the energy spectrum E ∝ k⊥^(1-ξ) and the dispersion law ω ∝ k⊥^(2-η). In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the corrections to ordinary scaling are polynomials of logarithms of the integral turbulence scale L.

  2. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.

  3. Feasibility demonstration of a massively parallelizable optical near-field sensor for sub-wavelength defect detection and imaging

    PubMed Central

    Mostafavi, Mahkamehossadat; Diaz, Rodolfo E.

    2016-01-01

    To detect and resolve sub-wavelength features at optical frequencies, beyond the diffraction limit, requires sensors that interact with the electromagnetic near-field of those features. Most instruments operating in this modality scan a single detector element across the surface under inspection because the scattered signals from a multiplicity of such elements would end up interfering with each other. However, an alternative massively parallelized configuration, capable of interrogating multiple adjacent areas of the surface at the same time, was proposed in 2002. Full physics simulations of the photonic antenna detector element that enables this instrument show that, using conventional red laser light (in the 600 nm range), the detector magnifies the signal from an 8 nm particle by up to 1.5 orders of magnitude. The antenna is a shaped slot element in a 60 nm silver film. The ability of this detector element to resolve λ/78 objects is confirmed experimentally at radio frequencies by fabricating an artificial material structure that mimics the optical permittivity of silver scaled to 2 GHz, and “cutting” into it the slot antenna. The experimental set-up is also used to demonstrate the imaging of a patterned surface in which the critical dimensions of the pattern are λ/22 in size. PMID:27185385

  4. Shape reconstruction of irregular bodies with multiple complementary data sources

    NASA Astrophysics Data System (ADS)

    Kaasalainen, M.; Viikinkoski, M.

    2012-07-01

    We discuss inversion methods for shape reconstruction with complementary data sources. The current main sources are photometry, adaptive optics or other images, occultation timings, and interferometry, and the procedure can readily be extended to include range-Doppler radar and thermal infrared data as well. We introduce the octantoid, a generally applicable shape support that can be automatically used for surface types encountered in planetary research, including strongly nonconvex or non-starlike shapes. We present models of Kleopatra and Hermione from multimodal data as examples of this approach. An important concept in this approach is the optimal weighting of the various data modes. We define the maximum compatibility estimate, a multimodal generalization of the maximum likelihood estimate, for this purpose. We also present a specific version of the procedure for asteroid flyby missions, with which one can reconstruct the complete shape of the target by using the flyby-based map of a part of the surface together with other available data. Finally, we show that the relative volume error of a shape solution is usually approximately equal to the relative shape error rather than its multiple. Our algorithms are trivially parallelizable, so running the code on a CUDA-enabled graphics processing unit is some two orders of magnitude faster than the usual single-processor mode.

  5. Inverse 4D conformal planning for lung SBRT using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Modiri, A.; Gu, X.; Hagan, A.; Bland, R.; Iyengar, P.; Timmerman, R.; Sawant, A.

    2016-08-01

    A critical aspect of highly potent regimens such as lung stereotactic body radiation therapy (SBRT) is to avoid collateral toxicity while achieving planning target volume (PTV) coverage. In this work, we describe four-dimensional conformal radiotherapy using a highly parallelizable swarm intelligence-based stochastic optimization technique. Conventional lung CRT-SBRT uses a 4DCT to create an internal target volume and then, using forward-planning, generates a 3D conformal plan. In contrast, we investigate an inverse-planning strategy that uses 4DCT data to create a 4D conformal plan, which is optimized across the three spatial dimensions (3D) as well as time, as represented by the respiratory phase. The key idea is to use respiratory motion as an additional degree of freedom. We iteratively adjust fluence weights for all beam apertures across all respiratory phases considering OAR sparing, PTV coverage and delivery efficiency. To demonstrate proof-of-concept, five non-small-cell lung cancer SBRT patients were retrospectively studied. The 4D optimized plans achieved PTV coverage comparable to the corresponding clinically delivered plans while showing significantly superior OAR sparing, ranging from 26% to 83% for Dmax heart, 10%-41% for Dmax esophagus, 31%-68% for Dmax spinal cord and 7%-32% for V13 lung.

  6. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as the particles' inertia dims. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied on a real 3D data set in the context of induced seismicity.
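
    For readers unfamiliar with the optimizer, the following is a minimal sketch of a standard particle swarm update (inertia, cognitive, and social terms), not the authors' competitive CPSO variant; the Rastrigin test function and all tuning constants are placeholders standing in for a traveltime misfit.

```python
import numpy as np

def rastrigin(x):
    """Multimodal test function standing in for a traveltime misfit."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
n_part, n_dim, n_iter = 40, 5, 300
w, c1, c2 = 0.72, 1.49, 1.49                      # inertia, cognitive, social weights

pos = rng.uniform(-5.12, 5.12, (n_part, n_dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([rastrigin(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, n_dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([rastrigin(p) for p in pos])      # independent evaluations -> parallelizable
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best misfit found:", pbest_f.min())
```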

  7. Fast parallel tandem mass spectral library searching using GPU hardware acceleration.

    PubMed

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K; Martin, Daniel B

    2011-06-03

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper, we present a proof of concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment.
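
    The parallelizable kernel referred to above can be pictured as one large matrix-vector product between a binned query spectrum and the binned library spectra. The sketch below uses NumPy on the CPU as a stand-in for FastPaSS's CUDA kernels; the bin count, library size, and random spectra are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_lib = 2000, 10_000
library = rng.random((n_lib, n_bins)).astype(np.float32)   # binned library spectra (toy)
query = rng.random(n_bins).astype(np.float32)              # binned acquired spectrum (toy)

# L2-normalize so the dot product becomes a cosine similarity score
library /= np.linalg.norm(library, axis=1, keepdims=True)
query /= np.linalg.norm(query)

scores = library @ query                 # one big matrix-vector product: the GPU-friendly step
best = np.argsort(scores)[::-1][:5]      # top-5 candidate identifications
print(best, scores[best])
```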

  8. Inverse 4D conformal planning for lung SBRT using particle swarm optimization

    PubMed Central

    Modiri, A; Gu, X; Hagan, A; Bland, R; Iyengar, P; Timmerman, R; Sawant, A

    2016-01-01

    A critical aspect of highly potent regimens such as lung stereotactic body radiation therapy (SBRT) is to avoid collateral toxicity while achieving planning target volume (PTV) coverage. In this work, we describe four dimensional conformal radiotherapy (4D CRT) using a highly parallelizable swarm intelligence-based stochastic optimization technique. Conventional lung CRT-SBRT uses a 4DCT to create an internal target volume (ITV) and then, using forward-planning, generates a 3D conformal plan. In contrast, we investigate an inverse-planning strategy that uses 4DCT data to create a 4D conformal plan, which is optimized across the three spatial dimensions (3D) as well as time, as represented by the respiratory phase. The key idea is to use respiratory motion as an additional degree of freedom. We iteratively adjust fluence weights for all beam apertures across all respiratory phases considering OAR sparing, PTV coverage and delivery efficiency. To demonstrate proof-of-concept, five non-small-cell lung cancer SBRT patients were retrospectively studied. The 4D optimized plans achieved PTV coverage comparable to the corresponding clinically delivered plans while showing significantly superior OAR sparing ranging from 26% to 83% for Dmax heart, 10% to 41% for Dmax esophagus, 31% to 68% for Dmax spinal cord and 7% to 32% for V13 lung. PMID:27476472

  9. Posttest analysis of MIST Test 330302 using TRAC-PF1/MOD1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyack, B E

    This report discusses a posttest analysis of Multi-Loop Integral System Test (MIST) 330302 which has been performed using TRAC-PF1/MOD1. This test was one of a group performed in the MIST facility to investigate high-pressure injection (HPI)-power-operated relief valve (PORV) cooling, also known as feed-and-bleed cooling. In Test 330302, HPI cooling was delayed 20 min after opening and locking the PORV open to induce extensive system voiding. We have concluded that the TRAC-calculated results are in reasonable overall agreement with the data for Test 330302. All major trends and phenomena were correctly predicted. Differences observed between the measured and calculated results have been traced and related, in part, to deficiencies in our knowledge of the facility configuration and operation. We have identified two models for which additional review is appropriate. However, in general, the TRAC closure models and correlations appear to be adequate for the prediction of the phenomena expected to occur during feed-and-bleed transients in the MIST facility. We believe that the correct conclusions about trends and phenomena will be reached if the code is used in similar applications. Conclusions reached regarding use of the code to calculate similar phenomena in full-size plants (scaling implications) and regulatory implications of this work are also presented.

  10. MIST final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gloudemans, J.R.

    1991-08-01

    The multiloop integral system test (MIST) was part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock & Wilcox-designed plants. MIST was sponsored by the US Nuclear Regulatory Commission, the Babcock & Wilcox Owners Group, the Electric Power Research Institute, and Babcock & Wilcox. The unique features of the Babcock & Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral system facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the once-through integral system (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST program is reported in eleven volumes; Volumes 2 through 8 pertain to groups of Phase 3 tests by type, and Volume 9 presents inter-group comparisons. Volume 10 provides comparisons between the RELAP5 MOD2 calculations and MIST observations, and Volume 11 (with addendum) presents the later, Phase 4 tests. This is Volume 1 of the MIST final report, a summary of the entire MIST program. Major topics include: Test Advisory Group (TAG) issues; facility scaling and design; test matrix; observations; comparisons of RELAP5 calculations to MIST observations; and MIST versus the TAG issues. 11 refs., 29 figs., 9 tabs.

  11. A Wearable EEG-HEG-HRV Multimodal System With Simultaneous Monitoring of tES for Mental Health Management.

    PubMed

    Ha, Unsoo; Lee, Yongsu; Kim, Hyunki; Roh, Taehwan; Bae, Joonsung; Kim, Changhyeon; Yoo, Hoi-Jun

    2015-12-01

    A multimodal mental management system in the shape of a wearable headband and earplugs is proposed to monitor electroencephalography (EEG), hemoencephalography (HEG) and heart rate variability (HRV) for accurate mental health monitoring. It enables simultaneous transcranial electrical stimulation (tES) together with real-time monitoring. The total weight of the proposed system is less than 200 g. The multi-loop low-noise amplifier (MLLNA) achieves over 130 dB CMRR for EEG sensing, and the capacitive correlated-double-sampling transimpedance amplifier (CCTIA) has low-noise characteristics for HEG and HRV sensing. Measured signals from three physiological domains, neural, vascular and autonomic, are combined with canonical correlation analysis (CCA) and temporal kernel canonical correlation analysis (tkCCA) algorithms to find the neural-vascular-autonomic coupling. This supports highly accurate classification, with a maximum improvement of 19% from multimodal monitoring. For multi-channel stimulation functionality, after-effects maximization monitoring and sympathetic nerve disorder monitoring, the stimulator is designed to be reconfigurable. The 3.37 × 2.25 mm² chip has a 2-channel EEG sensor front-end, a 2-channel NIRS sensor front-end, a NIRS current driver to drive a dual-wavelength VCSEL, and a 6-b DAC current source for the tES mode. It dissipates 24 mW with 2 mA stimulation current and 5 mA NIRS driver current.
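
    A minimal sketch of the CCA step mentioned above, using scikit-learn's CCA on synthetic two-block data with a shared latent signal; the block sizes, noise levels, and the interpretation as neural versus vascular features are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 500
latent = rng.standard_normal((n, 1))                        # shared "coupling" signal
X = np.hstack([latent + 0.5 * rng.standard_normal((n, 1)) for _ in range(4)])  # e.g. EEG features
Y = np.hstack([latent + 0.8 * rng.standard_normal((n, 1)) for _ in range(3)])  # e.g. HEG/NIRS features

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)                            # canonical variates of each block
r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(f"first canonical correlation ~ {r:.2f}")
```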

  12. Steady-state bumpless transfer under controller uncertainty using the state/output feedback topology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, K.; Lee, A.H.; Bentsman, J.

    2006-01-15

    Linear quadratic (LQ) bumpless transfer design introduced recently by Turner and Walker gives a very convenient and straightforward computational procedure for steady-state bumpless transfer operator synthesis. It is, however, found to be incapable of providing convergence of the output of the offline controller to that of the online controller in several industrial applications, producing bumps in the plant output in the wake of controller transfer. An examination of this phenomenon reveals that the applications in question are characterized by a significant mismatch, further referred to as controller uncertainty, between the dynamics of the implemented controllers and their models used in the transfer operator computation. To address this problem, while retaining the convenience of the Turner and Walker design, a novel state/output feedback bumpless transfer topology is introduced that employs the nominal state of the offline controller and, through the use of an additional controller/model mismatch compensator, also the offline controller output. A corresponding steady-state bumpless transfer design procedure along with the supporting theory is developed for a large class of systems. Due to these features, it is demonstrated to solve a long-standing problem of high-quality steady-state bumpless transfer from the industry-standard low-order nonlinear multiloop PID-based controllers to modern multi-input-multi-output (MIMO) robust controllers in the megawatt/throttle pressure control of a typical coal-fired boiler/turbine unit.

  13. A Pilot Model for the NASA Simplified Aid for EVA Rescue (SAFER) (Single-Axis Pitch Task)

    NASA Astrophysics Data System (ADS)

    Handley, Patrick Mark

    This thesis defines, tests, and validates a descriptive pilot model for a single-axis pitch control task of the Simplified Aid for EVA Rescue (SAFER). SAFER is a small propulsive jetpack used by astronauts for self-rescue. Pilot model research supports development of improved self-rescue strategies and technologies through insights into pilot behavior. This thesis defines a multi-loop pilot model. The innermost loop controls the hand controller, the middle loop controls pitch rate, and the outer loop controls pitch angle. A human-in-the-loop simulation was conducted to gather data from a human pilot. Quantitative and qualitative metrics both indicate that the model is an acceptable fit to the human data. Fuel consumption was nearly identical; time to task completion matched very well. There is some evidence that the model responds faster to initial pitch rates than the human, artificially decreasing the model's time to task completion. This pilot model is descriptive, not predictive, of the human pilot. Insights into pilot behavior are drawn from this research. Symmetry implies that the human responds to positive and negative initial conditions with the same strategy. The human pilot appears indifferent to pitch angles within 0.5 deg, coasts at a constant pitch rate of 1.09 deg/s, and has a reaction delay of 0.1 s.

  14. The detailed 3D multi-loop aggregate/rosette chromatin architecture and functional dynamic organization of the human and mouse genomes.

    PubMed

    Knoch, Tobias A; Wachsmuth, Malte; Kepper, Nick; Lesnussa, Michael; Abuseiris, Anis; Ali Imam, A M; Kolovos, Petros; Zuin, Jessica; Kockx, Christel E M; Brouwer, Rutger W W; van de Werken, Harmen J G; van IJcken, Wilfred F J; Wendt, Kerstin S; Grosveld, Frank G

    2016-01-01

    The dynamic three-dimensional chromatin architecture of genomes and its co-evolutionary connection to its function (the storage, expression, and replication of genetic information) is still one of the central issues in biology. Here, we describe the much debated 3D architecture of the human and mouse genomes from the nucleosomal to the megabase pair level by a novel approach combining selective high-throughput high-resolution chromosomal interaction capture (T2C), polymer simulations, and scaling analysis of the 3D architecture and the DNA sequence. The genome is compacted into a chromatin quasi-fibre with ~5 ± 1 nucleosomes/11 nm, folded into stable ~30-100 kbp loops forming stable loop aggregates/rosettes connected by similarly sized linkers. Minor but significant variations in the architecture are seen between cell types and functional states. The architecture and the DNA sequence show very similar fine-structured multi-scaling behaviour, confirming their co-evolution and the above. This architecture, its dynamics, and its accessibility balance stability and flexibility, ensuring genome integrity and variation and enabling gene expression/regulation by self-organization of (in)active units already in proximity. Our results agree with the heuristics of the field and allow "architectural sequencing" at a genome mechanics level to understand the inseparable systems genomic properties.

  15. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  16. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application, developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
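
    The Amdahl's-law observation above can be checked with the reported numbers (30 h on one instance, 48.6 min on 64 instances): the implied speedup is about 37x and the implied parallelizable fraction is close to 99%. The short calculation below assumes only the figures quoted in the abstract.

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the parallelizable fraction
t1_hours, t64_minutes, n = 30.0, 48.6, 64          # values quoted in the abstract
speedup = (t1_hours * 60.0) / t64_minutes          # observed speedup, ~37x

# Solve Amdahl's law for the parallel fraction p implied by that speedup
p = (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)
print(f"observed speedup = {speedup:.1f}x, implied parallel fraction p = {p:.3f}")
```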

  17. Energy minimization in nematic liquid crystal systems driven by geometric confinement and temperature gradients with applications in colloidal systems

    NASA Astrophysics Data System (ADS)

    Kolacz, Jakub

    We first explore the topology of liquid crystals and look at the fundamental limitations of liquid crystals in confined geometries. The properties of liquid crystal droplets are studied both theoretically and through simulations. We then demonstrate a method of chemically patterning surfaces that allows us to generate periodic arrays of micron-sized liquid crystal droplets and compare them to our simulation results. The parallelizable method of self-localizing liquid crystals using 2D chemical patterning developed here has applications in liquid crystal biosensors and lens arrays. We also present the first work looking at colloidal liquid crystals in the context of thermophoresis. We observe that strong negative thermophoresis occurs in these systems and develop a theory based on elastic energy minimization. We also calculate a Soret coefficient two orders of magnitude larger than those present in the literature. This large Soret coefficient has considerable potential for improving thermophoretic sorting mechanisms such as Thermal Field-Flow Fractionation and MicroScale Thermophoresis. The final piece of this work demonstrates a method of using projection lithography to polymerize liquid crystal colloids with a defined internal director. While still a work in progress, there is potential for generating systems of active colloids that can change shape upon external stimulus and for generating self-folding shapes by selective polymerization and director predetermination, in the vein of micro-kirigami.

  18. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploited the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
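
    A minimal sketch of the ordering described above, estimating the noise covariance first (here from spatial neighbour differences) and then solving the generalized eigenproblem against the image covariance. NumPy/SciPy on the CPU stand in for the GPU implementation; the random data cube and the horizontal-difference noise estimator are illustrative assumptions, not the G-OMNF code.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
rows, cols, bands = 64, 64, 32
cube = rng.random((rows, cols, bands))                      # toy hyperspectral cube

X = cube.reshape(-1, bands)
X = X - X.mean(axis=0)

# Noise estimate from differences of horizontally adjacent pixels
noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)
Cn = np.cov(noise, rowvar=False)                            # noise covariance (computed first)
Cx = np.cov(X, rowvar=False)                                # image covariance

# Generalized eigenproblem Cx v = lambda Cn v; components ordered by signal-to-noise
evals, evecs = eigh(Cx, Cn)
order = np.argsort(evals)[::-1]
mnf_bands = X @ evecs[:, order]                             # MNF-transformed features
print("leading signal-to-noise eigenvalues:", evals[order][:5])
```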

  19. Hierarchical animal movement models for population-level inference

    USGS Publications Warehouse

    Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivans, Jacob S.

    2016-01-01

    New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.

  20. Large-scale seismic signal analysis with Hadoop

    DOE PAGES

    Addair, T. G.; Dodge, D. A.; Walter, W. R.; ...

    2014-02-11

    In seismology, waveform cross correlation has been used for years to produce high-precision hypocenter locations and for sensitive detectors. Because correlated seismograms generally are found only at small hypocenter separation distances, correlation detectors have historically been reserved for spotlight purposes. However, many regions have been found to produce large numbers of correlated seismograms, and there is growing interest in building next-generation pipelines that employ correlation as a core part of their operation. In an effort to better understand the distribution and behavior of correlated seismic events, we have cross correlated a global dataset consisting of over 300 million seismograms. This was done using a conventional distributed cluster, and required 42 days. In anticipation of processing much larger datasets, we have re-architected the system to run as a series of MapReduce jobs on a Hadoop cluster. In doing so we achieved a factor of 19 performance increase on a test dataset. We found that fundamental algorithmic transformations were required to achieve the maximum performance increase. Whereas in the original IO-bound implementation, we went to great lengths to minimize IO, in the Hadoop implementation where IO is cheap, we were able to greatly increase the parallelism of our algorithms by performing a tiered series of very fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce jobs required reading and writing large amounts of data.
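
    As a toy picture of the fine-grained, highly parallelizable transformations mentioned above, the sketch below expresses pairwise waveform correlation as a map step (emit a normalized cross-correlation per candidate pair) and a reduce step (keep highly correlated pairs). Plain Python stands in for Hadoop MapReduce, and the random waveforms and detection threshold are made up.

```python
import numpy as np
from itertools import combinations
from functools import reduce

rng = np.random.default_rng(0)
waveforms = {f"evt{i}": rng.standard_normal(1000) for i in range(50)}   # toy seismograms

def normalized_cc(a, b):
    """Peak normalized cross-correlation between two traces (roughly in [-1, 1])."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

# "map": emit (pair, correlation) for every candidate pair in a bucket
mapped = (((i, j), normalized_cc(waveforms[i], waveforms[j]))
          for i, j in combinations(waveforms, 2))

# "reduce": keep only highly correlated pairs (would-be correlation detections)
detections = reduce(lambda acc, kv: acc + [kv] if kv[1] > 0.7 else acc, mapped, [])
print(f"{len(detections)} correlated pairs above threshold")
```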

  1. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

    We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
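
    A minimal sketch of the sample-persistence idea under stated assumptions: a cheap random-restart local search stands in for the annealer or Monte Carlo samplers, and variables whose values agree across most low-energy samples are frozen before the reduced problem is sampled again. The toy couplings, sample counts, and the 0.9 persistence threshold are illustrative, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
J = np.triu(rng.standard_normal((n, n)), 1)
J = J + J.T                                               # toy symmetric Ising couplings

def energy(s):
    return -0.5 * s @ J @ s

def greedy_sample(fixed):
    """Cheap local search standing in for a low-energy sampler; fixed spins stay put."""
    s = rng.choice([-1, 1], size=n)
    for i, v in fixed.items():
        s[i] = v
    for _ in range(5 * n):
        i = int(rng.integers(n))
        if i in fixed:
            continue
        t = s.copy()
        t[i] = -t[i]
        if energy(t) < energy(s):
            s = t
    return s

fixed = {}                                                # variable index -> frozen spin value
for _ in range(4):                                        # a few persistence rounds
    samples = np.array([greedy_sample(fixed) for _ in range(20)])
    for i in range(n):
        if i not in fixed and abs(samples[:, i].mean()) > 0.9:   # value persists across samples
            fixed[i] = int(np.sign(samples[:, i].mean()))

print(f"fixed {len(fixed)} of {n} variables; best energy seen = "
      f"{min(energy(s) for s in samples):.2f}")
```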

  2. Fast parallel tandem mass spectral library searching using GPU hardware acceleration

    PubMed Central

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K.; Martin, Daniel B.

    2011-01-01

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper we present a proof of concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching) is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment. PMID:21545112

  3. Thin-film-transistor array: an exploratory attempt for high throughput cell manipulation using electrowetting principle

    NASA Astrophysics Data System (ADS)

    Shaik, F. Azam; Cathcart, G.; Ihida, S.; Lereau-Bernier, M.; Leclerc, E.; Sakai, Y.; Toshiyoshi, H.; Tixier-Mita, A.

    2017-05-01

    In lab-on-a-chip (LoC) devices, microfluidic displacement of liquids is a key component. electrowetting on dielectric (EWOD) is a technique to move fluids, with the advantage of not requiring channels, pumps or valves. Fluids are discretized into droplets on microelectrodes and moved by applying an electric field via the electrodes to manipulate the contact angle. Micro-objects, such as biological cells, can be transported inside of these droplets. However, the design of conventional microelectrodes, made by standard micro-fabrication techniques, fixes the path of the droplets, and limits the reconfigurability of paths and thus limits the parallel processing of droplets. In that respect, thin film transistor (TFT) technology presents a great opportunity as it allows infinitely reconfigurable paths, with high parallelizability. We propose here to investigate the possibility of using TFT array devices for high throughput cell manipulation using EWOD. A COMSOL based 2D simulation coupled with a MATLAB algorithm was used to simulate the contact angle modulation, displacement and mixing of droplets. These simulations were confirmed by experimental results. The EWOD technique was applied to a droplet of culture medium containing HepG2 carcinoma cells and demonstrated no negative effects on the viability of the cells. This confirms the possibility of applying EWOD techniques to cellular applications, such as parallel cell analysis.

  4. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested; a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-Iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
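
    The flavor of an inexact Newton iteration with a Krylov inner solver can be reproduced with SciPy's newton_krylov on a small stand-in problem: the 1D Bratu equation u'' + exp(u) = 0 rather than the transonic small disturbance equation, with GMRES in place of OSOmin and no ILU preconditioner. This is only a hedged sketch, not the authors' solver.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 200
h = 1.0 / (n + 1)

def residual(u):
    """Second-difference Laplacian with u = 0 at both boundaries, plus the nonlinearity."""
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
    return lap + np.exp(u)

u0 = np.zeros(n)
sol = newton_krylov(residual, u0, method="gmres", f_tol=1e-10)
print("max |residual| =", np.abs(residual(sol)).max(), " max u =", sol.max())
```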

  5. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
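
    A minimal sketch of training a linear decoder with a minimum-error-entropy-style criterion: instead of minimizing mean-squared error, gradient ascent is performed on the quadratic information potential of the errors (a Gaussian-kernel sum over pairwise error differences). The synthetic spike-count data, kernel width, and learning rate are assumptions; the O(N^2) pairwise kernel sum shown is the kind of computation the paper maps onto parallel hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 8
X = rng.standard_normal((n, d))                  # e.g. binned spike counts (toy)
w_true = rng.standard_normal(d)
y = X @ w_true + 0.3 * rng.standard_normal(n)    # e.g. hand velocity (toy)

sigma, lr = 3.0, 1.0
w = np.zeros(d)
for _ in range(300):
    e = y - X @ w
    diff = e[:, None] - e[None, :]               # pairwise error differences e_i - e_j
    G = np.exp(-diff**2 / (2 * sigma**2))        # Gaussian kernel values
    # dV/dw = (1 / (N^2 sigma^2)) * sum_ij G_ij * (e_i - e_j) * (x_i - x_j)
    grad = (G * diff)[:, :, None] * (X[:, None, :] - X[None, :, :])
    w += lr * grad.sum(axis=(0, 1)) / (n**2 * sigma**2)   # ascend the information potential

print("weight recovery error:", np.linalg.norm(w - w_true))
```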

  6. Strength in Numbers: Using Big Data to Simplify Sentiment Classification.

    PubMed

    Filippas, Apostolos; Lappas, Theodoros

    2017-09-01

    Sentiment classification, the task of assigning a positive or negative label to a text segment, is a key component of mainstream applications such as reputation monitoring, sentiment summarization, and item recommendation. Even though the performance of sentiment classification methods has steadily improved over time, their ever-increasing complexity renders them comprehensible by only a shrinking minority of expert practitioners. For all others, such highly complex methods are black-box predictors that are hard to tune and even harder to justify to decision makers. Motivated by these shortcomings, we introduce BigCounter: a new algorithm for sentiment classification that substitutes algorithmic complexity with Big Data. Our algorithm combines standard data structures with statistical testing to deliver accurate and interpretable predictions. It is also parameter free and suitable for use virtually "out of the box," which makes it appealing for organizations wanting to leverage their troves of unstructured data without incurring the significant expense of creating in-house teams of data scientists. Finally, BigCounter's efficient and parallelizable design makes it applicable to very large data sets. We apply our method on such data sets toward a study on the limits of Big Data for sentiment classification. Our study finds that, after a certain point, predictive performance tends to converge and additional data have little benefit. Our algorithmic design and findings provide the foundations for future research on the data-over-computation paradigm for classification problems.
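
    The sketch below is not the authors' BigCounter algorithm, only a minimal illustration of the same data-over-computation spirit: per-word polarity counts, a simple binomial significance test to discard unpolarized words, and classification by summing smoothed log-odds. The tiny corpus and the significance threshold are made up.

```python
from collections import Counter
from math import log
from scipy.stats import binomtest

pos_docs = ["great phone loved the battery", "great screen excellent value",
            "loved it great camera", "excellent battery great price"]
neg_docs = ["terrible battery awful screen", "poor value terrible support",
            "awful camera terrible price", "poor phone awful battery"]

pos_counts = Counter(w for d in pos_docs for w in d.split())
neg_counts = Counter(w for d in neg_docs for w in d.split())

def word_score(w, alpha=0.25):
    """Smoothed log-odds of a word, kept only if its polarity is statistically significant."""
    p, q = pos_counts[w], neg_counts[w]
    if p + q == 0 or binomtest(p, p + q, 0.5).pvalue > alpha:
        return 0.0
    return log((p + 1) / (q + 1))

def classify(text):
    return "positive" if sum(word_score(w) for w in text.split()) > 0 else "negative"

print(classify("great screen but poor battery"))
```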

  7. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, by docking a publicly available target receptor against approximately 2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
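
    A hedged PySpark sketch of the map step only: the compound library is distributed with parallelize and each compound is scored independently. The score_compound function is a hypothetical placeholder for invoking real docking software; it is not part of Spark-VS, and the toy SMILES strings are made up.

```python
from pyspark import SparkContext

def score_compound(smiles):
    # Hypothetical placeholder: a real pipeline would shell out to a docking engine
    # (e.g., via subprocess) and parse the returned binding score for this compound.
    return (smiles, float(len(smiles) % 7))                   # dummy deterministic "score"

if __name__ == "__main__":
    sc = SparkContext(appName="toy-virtual-screen")
    compounds = ["CCO", "c1ccccc1", "CC(=O)O", "CCN(CC)CC"]   # toy SMILES library
    scores = (sc.parallelize(compounds, numSlices=2)           # distribute the library
                .map(score_compound)                           # embarrassingly parallel docking
                .collect())
    print(sorted(scores, key=lambda kv: kv[1], reverse=True)[:3])   # top-ranked hits
    sc.stop()
```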

  8. Large-scale seismic signal analysis with Hadoop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addair, T. G.; Dodge, D. A.; Walter, W. R.

    In seismology, waveform cross correlation has been used for years to produce high-precision hypocenter locations and for sensitive detectors. Because correlated seismograms generally are found only at small hypocenter separation distances, correlation detectors have historically been reserved for spotlight purposes. However, many regions have been found to produce large numbers of correlated seismograms, and there is growing interest in building next-generation pipelines that employ correlation as a core part of their operation. In an effort to better understand the distribution and behavior of correlated seismic events, we have cross correlated a global dataset consisting of over 300 million seismograms. This was done using a conventional distributed cluster, and required 42 days. In anticipation of processing much larger datasets, we have re-architected the system to run as a series of MapReduce jobs on a Hadoop cluster. In doing so we achieved a factor of 19 performance increase on a test dataset. We found that fundamental algorithmic transformations were required to achieve the maximum performance increase. Whereas in the original IO-bound implementation, we went to great lengths to minimize IO, in the Hadoop implementation where IO is cheap, we were able to greatly increase the parallelism of our algorithms by performing a tiered series of very fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce jobs required reading and writing large amounts of data.

  9. Extraction-Separation Performance and Dynamic Modeling of Orion Test Vehicles with Adams Simulation: 3rd Edition

    NASA Technical Reports Server (NTRS)

    Varela, Jose G.; Reddy, Satish; Moeller, Enrique; Anderson, Keith

    2017-01-01

    NASA's Orion Capsule Parachute Assembly System (CPAS) Project is now in the qualification phase of testing, and the Adams simulation has continued to evolve to model the complex dynamics experienced during the test article extraction and separation phases of flight. The ability to initiate tests near the upper altitude limit of the Orion parachute deployment envelope requires extractions from the aircraft at 35,000 ft MSL. Engineering development phase testing of the Parachute Test Vehicle (PTV) carried by the Carriage Platform Separation System (CPSS) at altitude resulted in test support equipment hardware failures due to increased energy caused by higher true airspeeds. As a result, hardware modifications became a necessity, requiring ground static testing of the textile components to be conducted and a new ground dynamic test of the extraction system to be devised. Force-displacement curves from static tests were incorporated into the Adams simulations, allowing prediction of loads, velocities and margins encountered during both flight and ground dynamic tests. The Adams simulation was then further refined by fine-tuning the damping terms to match the peak loads recorded in the ground dynamic tests. The failure observed in flight testing was successfully replicated in ground testing and the true safety margins of the textile components were revealed. A multi-loop energy modulator was then incorporated into the system-level Adams simulation model so that its effect on improving test margins could be properly evaluated, leading to high-confidence ground verification testing of the final design solution.

  10. Posttest analysis of MIST Test 320201 using TRAC-PF1/MOD1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebe, D.A.; Steiner, J.L.; Boyack, B.E.

    A posttest calculation and analysis of Multi-Loop Integral System Test 320201, a small-break loss-of-coolant accident (SBLOCA) test with a scaled 50-cm² cold-leg pump discharge leak, has been completed and is reported herein. It was one in a series of tests, with leak size varied parametrically. Scaled leak sizes included 5, 10 (the nominal, Test 3109AA), and 50 cm². The test exhibited the major post-SBLOCA phenomena, as expected, including depressurization to saturation, interruption of loop flow, boiler-condenser mode cooling, refill, and postrefill cooldown. Full high-pressure injection and auxiliary feedwater were available, reactor coolant pumps were not available, and reactor-vessel vent valves and guard heaters were automatically controlled. Constant level control in the steam-generator (SG) secondaries was used after SG-secondary refill, and symmetric SG pressure control was also used. The sequence of events seen in this test was similar to the sequence of events for much of the nominal test, except that events occurred in a shorter time frame as the system inventory was reduced and the system depressurized at a faster rate. The calculation was performed using TRAC-PF1/MOD1. Agreement between test data and the calculation was generally reasonable. All major trends and phenomena were correctly predicted. We believe that the correct conclusions about trends and phenomena will be reached if the code is used in similar applications.

  12. Embedding climate change risk assessment within a governance context

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Benjamin L

    Climate change adaptation is increasingly being framed in the context of climate risk management. This has contributed to the proliferation of climate change vulnerability and/or risk assessments as means of supporting institutional decision-making regarding adaptation policies and measures. To date, however, little consideration has been given to how such assessment projects and programs interact with governance systems to facilitate or hinder the implementation of adaptive responses. An examination of recent case studies involving Australian local governments reveals two key linkages between risk assessment and the governance of adaptation. First, governance systems influence how risk assessment processes are conducted, by whom they are conducted, and whom they are meant to inform. Australia's governance system emphasizes evidence-based decision-making that reinforces a knowledge deficit model of decision support. Assessments are often carried out by external experts on behalf of local government, with limited participation by relevant stakeholders and/or civil society. Second, governance systems influence the extent to which the outputs from risk assessment activities are translated into adaptive responses and outcomes. Technical information regarding risk is often stranded by institutional barriers to adaptation, including poor uptake of information, competition on the policy agenda, and lack of sufficient entitlements. Yet, risk assessments can assist in bringing such barriers to the surface, where they can be debated and resolved. In fact, well-designed risk assessments can contribute to multi-loop learning by institutions, and that reflexive problem orientation may be one of the more valuable benefits of assessment.

  13. Two-motor direct drive control for elevation axis of telescope

    NASA Astrophysics Data System (ADS)

    Tang, T.; Tan, Y.; Ren, G.

    2014-07-01

    Two-motor applications have become very attractive in fields where high performance in position, speed, and acceleration must be achieved. For the elevation axis of a telescope control system, two-motor direct drive is proposed to enhance the performance of the tracking control system. Although this arrangement has several strengths, such as smaller motors and higher torsional structural dynamics, the synchronization control of the two motors is difficult and important. In this paper, a multi-loop control technique based on master-slave current control is used to synchronize the two motors, comprising a current control loop, a speed control loop and a position control loop. First, the direct-drive function of the two motors is modeled. Compared with a single-motor direct-drive system, the resonance frequency of the two-motor system is the same, while its anti-resonance frequency is 1.414 times that of the single-motor system. Because of the rigid coupling of the direct drive, the speeds of the two motors are the same, so synchronizing the motor torques is critical. The master-slave current control technique is effective for synchronizing the torques: the slave motor tracks the current loop of the master motor, and the speed loop feeds into the input of the master motor's current loop. Experiments test the performance of the two-motor drive system. The random tracking error is 0.0119" for a line trajectory of 0.01°/s.
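
    A toy discrete-time sketch of the master-slave structure described above: an outer position loop and a speed loop generate the master motor's current reference, the slave's current command tracks the master's measured current, and both torques act on one rigidly coupled inertia. All gains, motor constants, and the first-order current-loop models are illustrative assumptions, not the telescope's parameters.

```python
import numpy as np

dt, T = 1e-3, 2.0
J, kt, tau_i = 0.05, 0.8, 0.01          # load inertia, torque constant, current-loop lag (toy)
kp_pos, kp_vel, ki_vel = 8.0, 2.0, 10.0 # illustrative loop gains

theta = omega = i_m = i_s = vel_int = 0.0
theta_ref = np.deg2rad(10.0)            # 10-degree elevation step command

for _ in range(int(T / dt)):
    omega_ref = kp_pos * (theta_ref - theta)              # outer position loop
    vel_err = omega_ref - omega
    vel_int += vel_err * dt
    i_ref = kp_vel * vel_err + ki_vel * vel_int           # speed loop -> master current reference

    i_m += dt / tau_i * (i_ref - i_m)                     # master current loop (first-order lag)
    i_s += dt / tau_i * (i_m - i_s)                       # slave current command tracks the master

    omega += dt * kt * (i_m + i_s) / J                    # rigidly coupled single inertia
    theta += dt * omega

print(f"final angle error = {np.degrees(theta_ref - theta) * 3600:.1f} arcsec, "
      f"torque-share mismatch = {abs(i_m - i_s):.4f} A")
```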

  14. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik; Marwan, Norbert; Dijkstra, Henk; Kurths, Jürgen

    2016-04-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience, representing the structure of statistical interrelationships in large data sets of time series, and, subsequently, for investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology. pyunicorn is available online at https://github.com/pik-copan/pyunicorn. Reference: J.F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q.-Y. Feng, L. Tupikina, V. Stolbova, R.V. Donner, N. Marwan, H.A. Dijkstra, and J. Kurths, Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package, Chaos 25, 113101 (2015), DOI: 10.1063/1.4934554, Preprint: arxiv.org:1507.01571 [physics.data-an].
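
    As a conceptual illustration of one building block the package offers, the NumPy sketch below constructs a recurrence network from a scalar time series (time-delay embedding, thresholded distance matrix, simple network measures). It deliberately does not use pyunicorn's own API; the embedding parameters and the recurrence-rate threshold are arbitrary illustrative choices.

      # Conceptual illustration (plain NumPy, not the pyunicorn API) of one analysis
      # step the package provides: building a recurrence network from a time series.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.standard_normal(500)

      # Time-delay embedding (dimension and delay chosen arbitrarily for illustration).
      m, tau = 3, 5
      N = len(x) - (m - 1) * tau
      states = np.column_stack([x[i * tau : i * tau + N] for i in range(m)])

      # Recurrence matrix: two states are linked if closer than a fixed threshold eps.
      dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
      eps = np.quantile(dists, 0.05)          # threshold set to a ~5% recurrence rate
      A = (dists < eps).astype(int)
      np.fill_diagonal(A, 0)                  # no self-loops

      # Simple network measures: degree and edge density of the recurrence network.
      degree = A.sum(axis=1)
      density = A.sum() / (N * (N - 1))
      print(f"nodes: {N}, mean degree: {degree.mean():.1f}, edge density: {density:.3f}")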

  15. Predictive coarse-graining

    NASA Astrophysics Data System (ADS)

    Schöberl, Markus; Zabaras, Nicholas; Koutsourelakis, Phaedon-Stelios

    2017-03-01

    We propose a data-driven, coarse-graining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a fine-to-coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse-to-fine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale (all-atom) data. From an information-theoretic perspective, the framework proposed provides an improvement upon the relative entropy method [1] and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the coarse-graining process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of fine-scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse-graining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse-grained model. A flexible and parallelizable Monte Carlo - Expectation-Maximization (MC-EM) scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPC/E water model.

  16. Polarizable atomic multipole X-ray refinement: application to peptide crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnieders, Michael J.; Fenn, Timothy D.; Howard Hughes Medical Institute

    2009-09-01

    A method to accelerate the computation of structure factors from an electron density described by anisotropic and aspherical atomic form factors via fast Fourier transformation is described for the first time. Recent advances in computational chemistry have produced force fields based on a polarizable atomic multipole description of biomolecular electrostatics. In this work, the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field is applied to restrained refinement of molecular models against X-ray diffraction data from peptide crystals. A new formalism is also developed to compute anisotropic and aspherical structure factors using fast Fourier transformation (FFT) of Cartesian Gaussian multipoles. Relative to direct summation, the FFT approach can give a speedup of more than an order of magnitude for aspherical refinement of ultrahigh-resolution data sets. Use of a sublattice formalism makes the method highly parallelizable. Application of the Cartesian Gaussian multipole scattering model to a series of four peptide crystals using multipole coefficients from the AMOEBA force field demonstrates that AMOEBA systematically underestimates electron density at bond centers. For the trigonal and tetrahedral bonding geometries common in organic chemistry, an atomic multipole expansion through hexadecapole order is required to explain bond electron density. Alternatively, the addition of interatomic scattering (IAS) sites to the AMOEBA-based density captured bonding effects with fewer parameters. For a series of four peptide crystals, the AMOEBA–IAS model lowered Rfree by 20–40% relative to the original spherically symmetric scattering model.

  17. Predictive coarse-graining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöberl, Markus, E-mail: m.schoeberl@tum.de; Zabaras, Nicholas; Department of Aerospace and Mechanical Engineering, University of Notre Dame, 365 Fitzpatrick Hall, Notre Dame, IN 46556

    We propose a data-driven, coarse-graining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a fine-to-coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse-to-fine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale (all-atom) data. From an information-theoretic perspective, the framework proposed provides an improvement upon the relative entropy method and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the coarse-graining process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of fine-scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse-graining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse-grained model. A flexible and parallelizable Monte Carlo – Expectation–Maximization (MC-EM) scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPC/E water model.

  18. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
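
    The basic Hamiltonian Monte Carlo building block referred to here, a leapfrog trajectory followed by a Metropolis accept/reject step, can be sketched in a few lines. The toy example below targets a standard normal posterior and does not reproduce the paper's polymer reinterpretation or the multiple time-scale integration; the step size and trajectory length are illustrative.

      # Minimal Hamiltonian Monte Carlo sketch for a toy 1-D posterior (standard
      # normal). This only illustrates the basic leapfrog/accept-reject building
      # block; the paper's polymer mapping and multiple time-scale integration are
      # not reproduced here.
      import numpy as np

      rng = np.random.default_rng(1)

      def neg_log_post(q):          # toy potential energy U(q) = q^2 / 2
          return 0.5 * q**2

      def grad_neg_log_post(q):
          return q

      def hmc_step(q, eps=0.2, n_leap=20):
          p = rng.standard_normal()                      # resample momentum
          q_new, p_new = q, p
          p_new -= 0.5 * eps * grad_neg_log_post(q_new)  # half kick
          for _ in range(n_leap - 1):
              q_new += eps * p_new                       # drift
              p_new -= eps * grad_neg_log_post(q_new)    # kick
          q_new += eps * p_new
          p_new -= 0.5 * eps * grad_neg_log_post(q_new)  # final half kick
          dH = (neg_log_post(q_new) + 0.5 * p_new**2) - (neg_log_post(q) + 0.5 * p**2)
          return q_new if np.log(rng.random()) < -dH else q

      samples, q = [], 0.0
      for _ in range(5000):
          q = hmc_step(q)
          samples.append(q)
      print(f"sample mean {np.mean(samples):.3f}, sample std {np.std(samples):.3f}")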

  19. MetaCRAST: reference-guided extraction of CRISPR spacers from unassembled metagenomes.

    PubMed

    Moller, Abraham G; Liang, Chun

    2017-01-01

    Clustered regularly interspaced short palindromic repeat (CRISPR) systems are the adaptive immune systems of bacteria and archaea against viral infection. While CRISPRs have been exploited as a tool for genetic engineering, their spacer sequences can also provide valuable insights into microbial ecology by linking environmental viruses to their microbial hosts. Despite this importance, metagenomic CRISPR detection remains a major challenge. Here we present a reference-guided CRISPR spacer detection tool (Metagenomic CRISPR Reference-Aided Search Tool, MetaCRAST) that constrains searches based on user-specified direct repeats (DRs). These DRs could be expected from assembly or taxonomic profiles of metagenomes. We compared the performance of MetaCRAST to those of two existing metagenomic CRISPR detection tools-Crass and MinCED-using both real and simulated acid mine drainage (AMD) and enhanced biological phosphorus removal (EBPR) metagenomes. Our evaluation shows MetaCRAST improves CRISPR spacer detection in real metagenomes compared to the de novo CRISPR detection methods Crass and MinCED. Evaluation on simulated metagenomes shows it performs better than de novo tools for Illumina metagenomes and comparably for 454 metagenomes. Its dependence on read length and community composition, its run time, and its accuracy are also comparable to those of these tools. MetaCRAST is implemented in Perl, parallelizable through the Many Core Engine (MCE), and takes metagenomic sequence reads and direct repeat queries (FASTA or FASTQ) as input. It is freely available for download at https://github.com/molleraj/MetaCRAST.
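
    The core idea of reference-guided spacer extraction, locating copies of a user-supplied direct repeat in a read and keeping the sequence between consecutive copies, can be sketched as follows. This is a toy Python illustration with a hypothetical repeat and read; the actual MetaCRAST tool is written in Perl and additionally handles approximate DR matching and MCE-based parallelism.

      # Toy sketch of reference-guided spacer extraction: find copies of a
      # user-supplied direct repeat (DR) in each read and report the sequences
      # between consecutive copies as candidate spacers.
      import re

      def extract_spacers(read, dr, min_len=20, max_len=50):
          """Return candidate spacers between exact, consecutive DR matches."""
          starts = [m.start() for m in re.finditer(re.escape(dr), read)]
          spacers = []
          for a, b in zip(starts, starts[1:]):
              spacer = read[a + len(dr) : b]
              if min_len <= len(spacer) <= max_len:
                  spacers.append(spacer)
          return spacers

      # Hypothetical read containing two DR copies flanking one spacer.
      dr = "GTTTTAGAGCTATGCTGTTTTG"
      spacer_true = "ACGTACGTACGTACGTACGTACGT"
      read = "AAAC" + dr + spacer_true + dr + "TTTG"
      print(extract_spacers(read, dr))   # -> [spacer_true]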

  20. Jllumina - A comprehensive Java-based API for statistical Illumina Infinium HumanMethylation450 and Infinium MethylationEPIC BeadChip data processing.

    PubMed

    Almeida, Diogo; Skov, Ida; Lund, Jesper; Mohammadnejad, Afsaneh; Silva, Artur; Vandin, Fabio; Tan, Qihua; Baumbach, Jan; Röttger, Richard

    2016-10-01

    Measuring differential DNA methylation is nowadays the most common approach to linking epigenetic modifications to diseases (in so-called epigenome-wide association studies, EWAS). Owing to their low cost, efficiency, and easy handling, the Illumina HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, are by far the most popular techniques for conducting EWAS in large patient cohorts. Despite the popularity of this chip technology, raw data processing and statistical analysis of the array data remain far from trivial and still lack dedicated software libraries enabling high-quality and statistically sound downstream analyses. As of yet, only R-based solutions are freely available for low-level processing of the Illumina chip data. However, the lack of alternative libraries poses a hurdle for the development of new bioinformatic tools, in particular when it comes to web services or applications where run time and memory consumption matter, or where EWAS data analysis is an integrative part of a bigger framework or data analysis pipeline. We have therefore developed and implemented Jllumina, an open-source Java library for raw data manipulation of Illumina Infinium HumanMethylation450 and Infinium MethylationEPIC BeadChip data, supporting the developer with Java functions that cover reading and preprocessing the raw data, down to statistical assessment, permutation tests, and identification of differentially methylated loci. Jllumina is fully parallelizable and publicly available at http://dimmer.compbio.sdu.dk/download.html.

  1. Jllumina - A comprehensive Java-based API for statistical Illumina Infinium HumanMethylation450 and MethylationEPIC data processing.

    PubMed

    Almeida, Diogo; Skov, Ida; Lund, Jesper; Mohammadnejad, Afsaneh; Silva, Artur; Vandin, Fabio; Tan, Qihua; Baumbach, Jan; Röttger, Richard

    2016-12-18

    Measuring differential DNA methylation is nowadays the most common approach to linking epigenetic modifications to diseases (in so-called epigenome-wide association studies, EWAS). Owing to their low cost, efficiency, and easy handling, the Illumina HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, are by far the most popular techniques for conducting EWAS in large patient cohorts. Despite the popularity of this chip technology, raw data processing and statistical analysis of the array data remain far from trivial and still lack dedicated software libraries enabling high-quality and statistically sound downstream analyses. As of yet, only R-based solutions are freely available for low-level processing of the Illumina chip data. However, the lack of alternative libraries poses a hurdle for the development of new bioinformatic tools, in particular when it comes to web services or applications where run time and memory consumption matter, or where EWAS data analysis is an integrative part of a bigger framework or data analysis pipeline. We have therefore developed and implemented Jllumina, an open-source Java library for raw data manipulation of Illumina Infinium HumanMethylation450 and Infinium MethylationEPIC BeadChip data, supporting the developer with Java functions that cover reading and preprocessing the raw data, down to statistical assessment, permutation tests, and identification of differentially methylated loci. Jllumina is fully parallelizable and publicly available at http://dimmer.compbio.sdu.dk/download.html.

  2. Design of high-performance parallelized gene predictors in MATLAB.

    PubMed

    Rivard, Sylvain Robert; Mailloux, Jean-Gabriel; Beguenane, Rachid; Bui, Hung Tien

    2012-04-10

    This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel's algorithm or on FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). Results show that an implementation using a straightforward approach can require over 4.5 h to process 15 million base pairs (bps) whereas a properly designed one could perform the same task in less than five minutes. In the best case, a GPU implementation can yield these results in 57 s. The present work shows how parallelism can be used in MATLAB for gene prediction in very large DNA sequences to produce results that are over 270 times faster than a conventional approach. This is significant as MATLAB is typically overlooked due to its apparent slow processing time even though it offers a convenient environment for bioinformatics. From a practical standpoint, this work proposes two strategies for accelerating genome data processing which rely on different parallelization mechanisms. Using a CPU, the work shows that direct access to the MEX function increases execution speed and that the PARFOR construct should be used in order to take full advantage of the parallelizable Goertzel implementation. When the target is a GPU, the work shows that data needs to be segmented into manageable sizes within the GFOR construct before processing in order to minimize execution time.
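
    Since the paper's designs center on Goertzel's algorithm for the period-3 signal commonly used in gene prediction, a compact sketch of that measure is given below, in Python rather than MATLAB. The sequence, window length, and scoring scheme are illustrative only and are not the paper's implementation.

      # Sketch (in Python rather than MATLAB) of the Goertzel-based period-3 measure
      # often used in gene prediction: evaluate the DFT of binary base-indicator
      # sequences at the single bin k = N/3 instead of computing a full FFT.
      import math

      def goertzel_power(x, k):
          """Power of the single DFT bin k of sequence x via Goertzel's recursion."""
          n = len(x)
          coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
          s_prev, s_prev2 = 0.0, 0.0
          for sample in x:
              s = sample + coeff * s_prev - s_prev2
              s_prev2, s_prev = s_prev, s
          return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

      def period3_spectrum(dna, window):
          """Sliding-window period-3 power summed over the four base indicators."""
          scores = []
          for i in range(0, len(dna) - window + 1, window):
              win = dna[i : i + window]
              k = window / 3.0
              score = sum(
                  goertzel_power([1.0 if b == base else 0.0 for b in win], k)
                  for base in "ACGT"
              )
              scores.append(score)
          return scores

      # Hypothetical toy sequence: a strongly 3-periodic stretch followed by a
      # 25-periodic (non-coding-like) stretch; the first windows score much higher.
      dna = "ATGGCC" * 60 + "ATCGGATTCAGCTTAGGCATCCGAC" * 15
      print(period3_spectrum(dna, window=180))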

  3. Imaging the eye fundus with real-time en-face spectral domain optical coherence tomography

    PubMed Central

    Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    Real-time display of processed en-face spectral domain optical coherence tomography (SD-OCT) images is important for diagnosis. However, due to the many data-processing steps required, such as fast Fourier transformation (FFT), data re-sampling, spectral shaping, apodization, and zero padding, followed by a software cut through the acquired 3D volume to produce an en-face slice, conventional high-speed SD-OCT cannot render an en-face OCT image in real time. Recently we demonstrated a Master/Slave (MS)-OCT method that is highly parallelizable, as it provides reflectivity values of points at depth within an A-scan in parallel. This allows direct production of en-face images. In addition, the MS-OCT method does not require data linearization, which further simplifies the processing. The computation in our previous paper was, however, time consuming. In this paper we present an optimized algorithm that provides en-face MS-OCT images much more quickly. Using this algorithm we demonstrate production of sets of en-face OCT images around 10 times faster than previously obtained, as well as simultaneous real-time display of up to 4 en-face OCT images of 200 × 200 pixels from the fovea and the optic nerve of a volunteer. We also demonstrate 3D and B-scan OCT images obtained from sets of MS-OCT C-scans, i.e. with no FFT and no intermediate step of generating A-scans. PMID:24761303

  4. Fast and Precise Emulation of Stochastic Biochemical Reaction Networks With Amplified Thermal Noise in Silicon Chips.

    PubMed

    Kim, Jaewook; Woo, Sung Sik; Sarpeshkar, Rahul

    2018-04-01

    The analysis and simulation of complex interacting biochemical reaction pathways in cells is important in all of systems biology and medicine. Yet, the dynamics of even a modest number of noisy or stochastic coupled biochemical reactions is extremely time consuming to simulate. In large part, this is because of the expensive cost of random number and Poisson process generation and the presence of stiff, coupled, nonlinear differential equations. Here, we demonstrate that we can amplify inherent thermal noise in chips to emulate randomness physically, thus alleviating these costs significantly. Concurrently, molecular flux in thermodynamic biochemical reactions maps to thermodynamic electronic current in a transistor such that stiff nonlinear biochemical differential equations are emulated exactly in compact, digitally programmable, highly parallel analog "cytomorphic" transistor circuits. For even small-scale systems involving just 80 stochastic reactions, our 0.35-μm BiCMOS chips yield a 311× speedup in the simulation time of Gillespie's stochastic algorithm over COPASI, a fast biochemical-reaction software simulator that is widely used in computational biology; they yield a 15 500× speedup over equivalent MATLAB stochastic simulations. The chip emulation results are consistent with these software simulations over a large range of signal-to-noise ratios. Most importantly, our physical emulation of Poisson chemical dynamics does not involve any inherently sequential processes and updates such that, unlike prior exact simulation approaches, they are parallelizable, asynchronous, and enable even more speedup for larger-size networks.
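
    For reference, the exact stochastic kinetics being emulated is Gillespie's direct method, sketched below in software for a toy birth-death network. Rate constants and the initial count are illustrative; the chip replaces the pseudo-random number generation and the sequential update loop with physical thermal noise and parallel analog dynamics.

      # Reference sketch of Gillespie's direct (stochastic simulation) algorithm for
      # a toy birth-death network. Rates and initial counts are illustrative only.
      import numpy as np

      rng = np.random.default_rng(2)

      # Reactions: (0) 0 -> X at rate k_prod, (1) X -> 0 at rate k_deg * X
      k_prod, k_deg = 10.0, 0.1
      x, t, t_end = 0, 0.0, 200.0
      trajectory = [(t, x)]

      while t < t_end:
          propensities = np.array([k_prod, k_deg * x])
          a0 = propensities.sum()
          if a0 == 0.0:
              break
          t += rng.exponential(1.0 / a0)              # time to next reaction
          r = rng.choice(2, p=propensities / a0)      # which reaction fires
          x += 1 if r == 0 else -1
          trajectory.append((t, x))

      print(f"steps: {len(trajectory)}, final count: {x} "
            f"(steady-state mean ~ {k_prod / k_deg:.0f})")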

  5. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.
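
    The structure of an inexact Newton-Krylov iteration, an outer Newton loop with an inner iterative linear solve, can be sketched on a toy nonlinear system. The example below uses SciPy's GMRES and a finite-difference Jacobian-vector product; it is a generic illustration, not the transonic small-disturbance discretization, the OSOmin solver, or the ILU preconditioner of the paper.

      # Toy inexact Newton-Krylov sketch: Newton's method with GMRES as the inner
      # linear solver and a matrix-free, finite-difference Jacobian-vector product.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):
          """A small nonlinear system F(u) = 0 (illustrative only)."""
          F = np.empty(len(u))
          F[0] = 2 * u[0] - u[1] + 0.1 * u[0] ** 3 - 1.0
          F[1:-1] = -u[:-2] + 2 * u[1:-1] - u[2:] + 0.1 * u[1:-1] ** 3
          F[-1] = -u[-2] + 2 * u[-1] + 0.1 * u[-1] ** 3 - 1.0
          return F

      def newton_gmres(u0, tol=1e-10, max_newton=20):
          u = u0.copy()
          for it in range(max_newton):
              F = residual(u)
              if np.linalg.norm(F) < tol:
                  break
              # Matrix-free Jacobian-vector product: J v ~ (F(u + h v) - F(u)) / h
              def jv(v, u=u, F=F, h=1e-7):
                  return (residual(u + h * v) - F) / h
              J = LinearOperator((len(u), len(u)), matvec=jv)
              du, info = gmres(J, -F)     # inner Krylov solve (inexact is fine)
              u += du
          return u, it

      u, iters = newton_gmres(np.zeros(50))
      print(f"Newton iterations: {iters}, "
            f"final residual: {np.linalg.norm(residual(u)):.2e}")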

  6. Fluidic Processing of High-Performance ZIF-8 Membranes on Polymeric Hollow Fibers: Mechanistic Insights and Microstructure Control

    DOE PAGES

    Eum, Kiwon; Rownaghi, Ali; Choi, Dalsu; ...

    2016-06-01

    Recently, a methodology for fabricating polycrystalline metal-organic framework (MOF) membranes has been introduced – referred to as interfacial microfluidic membrane processing – which allows parallelizable fabrication of MOF membranes inside polymeric hollow fibers of microscopic diameter. Such hollow fiber membranes, when bundled together into modules, are an attractive way to scale molecular sieving membranes. The understanding and engineering of fluidic processing techniques for MOF membrane fabrication are in their infancy. In this work, a detailed mechanistic understanding of MOF (ZIF-8) membrane growth under microfluidic conditions in polyamide-imide hollow fibers is reported, without any intermediate steps (such as seeding or surface modification) or post-synthesis treatments. A key finding is that interfacial membrane formation in the hollow fiber occurs via an initial formation of two distinct layers and the subsequent rearrangement into a single layer. This understanding is used to show how nonisothermal processing allows fabrication of thinner (5 μm) ZIF-8 films for higher throughput, and furthermore how engineering the polymeric hollow fiber support microstructure allows control of defects in the ZIF-8 membranes. Finally, the performance of these engineered ZIF-8 membranes is characterized: they have H2/C3H8 and C3H6/C3H8 mixture separation factors as high as 2018 and 65, respectively, and C3H6 permeances as high as 66 GPU.

  7. GPU-accelerated algorithms for many-particle continuous-time quantum walks

    NASA Astrophysics Data System (ADS)

    Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo

    2017-06-01

    Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with those of algorithms based on the exact diagonalization of the Hamiltonian or a 4th order Runge-Kutta integration. We prove that both Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation not depending on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about the execution time, and make simulations with many interacting particles on large lattices possible, with the only limit being the memory available on the device.
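
    The Taylor-series propagation idea can be sketched for the simplest case, a single-particle CTQW on a 1-D lattice, by applying a truncated series for exp(-iH dt) to the state at each step instead of diagonalizing H. The NumPy example below is CPU-only and uses illustrative sizes and truncation order; it is not the GPU code benchmarked in the paper.

      # Toy sketch of Taylor-series propagation for a continuous-time quantum walk:
      # apply exp(-i H dt) to the state by summing a truncated Taylor series in H.
      import numpy as np

      n_sites, dt, n_steps, order = 101, 0.1, 200, 12

      # Tight-binding Hamiltonian of a 1-D lattice (nearest-neighbour hopping).
      H = np.zeros((n_sites, n_sites), dtype=complex)
      for i in range(n_sites - 1):
          H[i, i + 1] = H[i + 1, i] = -1.0

      psi = np.zeros(n_sites, dtype=complex)
      psi[n_sites // 2] = 1.0                      # walker starts at the centre

      def taylor_step(H, psi, dt, order):
          """psi <- exp(-i H dt) psi via sum_k (-i dt)^k H^k psi / k!"""
          out = psi.copy()
          term = psi.copy()
          for k in range(1, order + 1):
              term = (-1j * dt / k) * (H @ term)
              out += term
          return out

      for _ in range(n_steps):
          psi = taylor_step(H, psi, dt, order)

      prob = np.abs(psi) ** 2
      pos = np.arange(n_sites) - n_sites // 2
      spread = np.sqrt(np.sum(prob * pos**2))
      print(f"norm: {prob.sum():.6f}, position spread: {spread:.2f} sites")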

  8. Implementing and testing a panel-based method for modeling acoustic scattering from CFD input

    NASA Astrophysics Data System (ADS)

    Swift, S. Hales

    Exposure of sailors to high levels of noise in the aircraft carrier deck environment is a problem that has serious human and economic consequences. A variety of approaches to quieting exhausting jets from high-performance aircraft are undergoing development. However, testing of noise abatement solutions at full-scale may be prohibitively costly when many possible nozzle treatments are under consideration. A relatively efficient and accurate means of predicting the noise levels resulting from engine-quieting technologies at personnel locations is needed. This is complicated by the need to model both the direct and the scattered sound field in order to determine the resultant spectrum and levels. While the direct sound field may be obtained using CFD plus surface integral methods such as the Ffowcs-Williams Hawkings method, the scattered sound field is complicated by its dependence on the geometry of the scattering surface--the aircraft carrier deck, aircraft control surfaces and other nearby structures. In this work, a time-domain boundary element method, or TD-BEM, (sometimes referred to in terms of source panels) is proposed and developed that takes advantage of and offers beneficial effects for the substantial planar components of the aircraft carrier deck environment and uses pressure gradients as its input. This method is applied to and compared with analytical results for planar surfaces, corners and spherical surfaces using an analytic point source as input. The method can also accept input from CFD data on an acoustic data surface by using the G1A pressure gradient formulation to obtain pressure gradients on the surface from the flow variables contained on the acoustic data surface. The method is also applied to a planar scattering surface characteristic of an aircraft carrier flight deck with an acoustic data surface from a supersonic jet large eddy simulation, or LES, as input to the scattering model. In this way, the process for modeling the complete sound field (assuming the availability of an acoustic data surface from a time-realized numerical simulation of the jet flow field) is outlined for a realistic group of source location, scattering surface location and observer locations. The method was able to successfully model planar cases, corners and spheres with a level of error that is low enough for some engineering purposes. Significant benefits were realized for fully planar surfaces including high parallelizability and avoidance of interaction between portions of the paneled boundary. When the jet large eddy simulation case was considered the method was able to capture a substantial portion of the spectrum including the peak frequency region and a majority of the spectral energy with good fidelity.

  9. Simultaneous determination of D-aspartic acid and D-glutamic acid in rat tissues and physiological fluids using a multi-loop two-dimensional HPLC procedure.

    PubMed

    Han, Hai; Miyoshi, Yurika; Ueno, Kyoko; Okamura, Chieko; Tojo, Yosuke; Mita, Masashi; Lindner, Wolfgang; Zaitsu, Kiyoshi; Hamase, Kenji

    2011-11-01

    For a metabolomics study focusing on the analysis of aspartic and glutamic acid enantiomers, a fully automated two-dimensional HPLC system employing a microbore-ODS column and a narrowbore-enantioselective column was developed. By using this system, a detailed distribution of D-Asp and D-Glu besides L-Asp and L-Glu in mammals was elucidated. For the total analysis concept, the amino acids were first pre-column derivatized with 4-fluoro-7-nitro-2,1,3-benzoxadiazole (NBD-F) to be sensitively and fluorometrically detected. For the non-stereoselective separation of the analytes in the first dimension a monolithic ODS column (750 mm × 0.53 mm i.d.) was adopted, and a self-packed narrowbore-Pirkle type enantioselective column (Sumichiral OA-2500S, 250 mm × 1.5 mm i.d.) was selected for the second dimension. In the rat plasma, RSD values for intra-day and inter-day precision were less than 6.8%, and the accuracy ranged between 96.1% and 105.8%. The values of LOQ of D-Asp and D-Glu were 5 fmol/injection (0.625 nmol/g tissue). The present method was successfully applied to the simultaneous determination of free aspartic acid and glutamic acid enantiomers in 7 brain areas, 11 peripheral tissues, plasma and urine of Wistar rats. Biologically significant D-Asp values were found in various tissue samples whereas for D-Glu the values were very low possibly indicating less significance. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. The flare kernel in the impulsive phase

    NASA Technical Reports Server (NTRS)

    Dejager, C.

    1986-01-01

    The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit with these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter, and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.

  11. cellGPU: Massively parallel simulations of dynamic vertex models

    NASA Astrophysics Data System (ADS)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation

  12. A Generic and Efficient E-field Parallel Imaging Correlator for Next-Generation Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Beardsley, Adam P.; Bowman, Judd D.; Morales, Miguel F.

    2017-05-01

    Modern radio telescopes are favouring densely packed array layouts with large numbers of antennas (N_A ≳ 1000). Since the complexity of traditional correlators scales as O(N_A^2), there will be a steep cost for realizing the full imaging potential of these powerful instruments. Through our generic and efficient E-field Parallel Imaging Correlator (epic), we present the first software demonstration of a generalized direct imaging algorithm, namely the Modular Optimal Frequency Fourier imager. Not only does it bring down the cost for dense layouts to O(N_A log_2 N_A) but it can also image from irregular layouts and heterogeneous arrays of antennas. epic is highly modular, parallelizable, implemented in object-oriented python, and publicly available. We have verified the images produced to be equivalent to those from traditional techniques to within a precision set by gridding coarseness. We have also validated our implementation on data observed with the Long Wavelength Array (LWA1). We provide a detailed framework for imaging with heterogeneous arrays and show that epic robustly estimates the input sky model for such arrays. Antenna layouts with dense filling factors consisting of a large number of antennas such as LWA, the Square Kilometre Array, Hydrogen Epoch of Reionization Array, and Canadian Hydrogen Intensity Mapping Experiment will gain significant computational advantage by deploying an optimized version of epic. The algorithm is a strong candidate for instruments targeting transient searches of fast radio bursts as well as planetary and exoplanetary phenomena due to the availability of high-speed calibrated time-domain images and low output bandwidth relative to visibility-based systems.
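
    The computational shortcut behind direct imaging can be sketched in a few lines: grid the per-antenna electric-field samples onto a regular aperture grid, take one 2-D FFT per time sample, and accumulate the squared image, instead of correlating all antenna pairs. The NumPy toy below uses a hypothetical random array layout and a single simulated point source; it is a conceptual illustration, not the epic code or its API.

      # Toy NumPy sketch of the direct-imaging idea: grid per-antenna E-field
      # samples, FFT once per time sample, and accumulate |image|^2, instead of
      # cross-correlating all antenna pairs. Layout, grid size, and the point
      # source are all illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      n_ant, grid, n_t = 64, 32, 500
      # Random antenna positions on a [0, grid) x [0, grid) aperture (cell units).
      ant_xy = rng.uniform(0, grid, size=(n_ant, 2))
      cells = np.floor(ant_xy).astype(int)

      # Simulated voltages: one far-field source at direction cosines (l0, m0)
      # plus receiver noise; the source imprints a phase gradient across the array.
      l0, m0 = 0.2, -0.1
      phase = 2 * np.pi * (ant_xy[:, 0] * l0 + ant_xy[:, 1] * m0)

      image = np.zeros((grid, grid))
      for _ in range(n_t):
          s = rng.standard_normal() + 1j * rng.standard_normal()   # source amplitude
          v = s * np.exp(1j * phase) + 0.5 * (rng.standard_normal(n_ant)
                                              + 1j * rng.standard_normal(n_ant))
          # Grid the E-field samples, one FFT per time step, accumulate the power.
          aperture = np.zeros((grid, grid), dtype=complex)
          np.add.at(aperture, (cells[:, 0], cells[:, 1]), v)
          image += np.abs(np.fft.fft2(aperture)) ** 2

      peak = np.unravel_index(np.argmax(image), image.shape)
      print(f"brightest pixel {peak}, expected near "
            f"({round(l0 * grid) % grid}, {round(m0 * grid) % grid})")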

  13. Fast and Exact Fiber Surfaces for Tetrahedral Meshes.

    PubMed

    Klacansky, Pavol; Tierny, Julien; Carr, Hamish; Zhao Geng

    2017-07-01

    Isosurfaces are fundamental geometrical objects for the analysis and visualization of volumetric scalar fields. Recent work has generalized them to bivariate volumetric fields with fiber surfaces, the pre-image of polygons in range space. However, the existing algorithm for their computation is approximate, and is limited to closed polygons. Moreover, its runtime performance does not allow instantaneous updates of the fiber surfaces upon user edits of the polygons. Overall, these limitations prevent a reliable and interactive exploration of the space of fiber surfaces. This paper introduces the first algorithm for the exact computation of fiber surfaces in tetrahedral meshes. It assumes no restriction on the topology of the input polygon, handles degenerate cases and better captures sharp features induced by polygon bends. The algorithm also allows visualization of individual fibers on the output surface, better illustrating their relationship with data features in range space. To enable truly interactive exploration sessions, we further improve the runtime performance of this algorithm. In particular, we show that it is trivially parallelizable and that it scales nearly linearly with the number of cores. Further, we study acceleration data-structures both in geometrical domain and range space and we show how to generalize interval trees used in isosurface extraction to fiber surface extraction. Experiments demonstrate the superiority of our algorithm over previous work, both in terms of accuracy and running time, with up to two orders of magnitude speedups. This improvement enables interactive edits of range polygons with instantaneous updates of the fiber surface for exploration purpose. A VTK-based reference implementation is provided as additional material to reproduce our results.

  14. Application of a Parallelizable Perfusion Bioreactor for Physiologic 3D Cell Culture.

    PubMed

    Egger, Dominik; Spitz, Sarah; Fischer, Monica; Handschuh, Stephan; Glösmann, Martin; Friemert, Benedikt; Egerbacher, Monika; Kasper, Cornelia

    2017-01-01

    It is crucial but challenging to keep physiologic conditions during the cultivation of 3D cell scaffold constructs for the optimization of 3D cell culture processes. Therefore, we demonstrate the benefits of a recently developed miniaturized perfusion bioreactor together with a specialized incubator system that allows for the cultivation of multiple samples while screening different conditions. Hence, a decellularized bone matrix was tested towards its suitability for 3D osteogenic differentiation under flow perfusion conditions. Subsequently, physiologic shear stress and hydrostatic pressure (HP) conditions were optimized for osteogenic differentiation of human mesenchymal stem cells (MSCs). X-ray computed microtomography and scanning electron microscopy (SEM) revealed a closed cell layer covering the entire matrix. Osteogenic differentiation assessed by alkaline phosphatase activity and SEM was found to be increased in all dynamic conditions. Furthermore, screening of different fluid shear stress (FSS) conditions revealed 1.5 mL/min (equivalent to ∼10 mPa shear stress) to be optimal. However, no distinct effect of HP compared to flow perfusion without HP on osteogenic differentiation was observed. Notably, throughout all experiments, cells cultivated under FSS or HP conditions displayed increased osteogenic differentiation, which underlines the importance of physiologic conditions. In conclusion, the bioreactor system was used for biomaterial testing and to develop and optimize a 3D cell culture process for the osteogenic differentiation of MSCs. Due to its versatility and higher throughput efficiency, we hypothesize that this bioreactor/incubator system will advance the development and optimization of a variety of 3D cell culture processes. © 2017 S. Karger AG, Basel.

  15. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-12-31

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of the MCNP code on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron-photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.
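
    The reason Monte Carlo transport multitasks so naturally can be shown with a toy example: histories are independent, so workers can run disjoint batches and the combined statistical error falls as 1/sqrt(N). The sketch below uses Python's multiprocessing in place of PVM and a trivial estimator in place of ITS physics; worker and batch counts are illustrative.

      # Illustration of why Monte Carlo transport parallelizes so well: particle
      # histories are independent, so workers run disjoint batches and the
      # statistical error falls as 1/sqrt(N_total). Python multiprocessing stands
      # in here for PVM; the "history" is a trivial toy estimator, not ITS physics.
      import numpy as np
      from multiprocessing import Pool

      def run_batch(args):
          seed, n_hist = args
          rng = np.random.default_rng(seed)
          # Toy history: estimate pi by sampling points in the unit square.
          xy = rng.random((n_hist, 2))
          return np.count_nonzero((xy ** 2).sum(axis=1) < 1.0)

      if __name__ == "__main__":
          n_workers, n_per_worker = 4, 1_000_000
          with Pool(n_workers) as pool:
              hits = pool.map(run_batch,
                              [(seed, n_per_worker) for seed in range(n_workers)])
          n_total = n_workers * n_per_worker
          estimate = 4.0 * sum(hits) / n_total
          # Binomial statistical uncertainty, which shrinks as 1/sqrt(N_total).
          sigma = 4.0 * np.sqrt(estimate / 4 * (1 - estimate / 4) / n_total)
          print(f"pi ~ {estimate:.5f} +/- {sigma:.5f} "
                f"from {n_total} histories on {n_workers} workers")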

  16. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-02-01

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources, and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of MCNP on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.

  17. A Fast Experimental Scanner for Proton CT: Technical Performance and First Experience with Phantom Scans

    PubMed Central

    Johnson, Robert P.; Bashkirov, Vladimir; DeWitt, Langley; Giacometti, Valentina; Hurley, Robert F.; Piersimoni, Pierluigi; Plautz, Tia E.; Sadrozinski, Hartmut F.-W.; Schubert, Keith; Schulte, Reinhard; Schultze, Blake; Zatserklyaniy, Andriy

    2016-01-01

    We report on the design, fabrication, and first tests of a tomographic scanner developed for proton computed tomography (pCT) of head-sized objects. After extensive preclinical testing, pCT is intended to be employed in support of proton therapy treatment planning and pre-treatment verification in patients undergoing particle-beam therapy. The scanner consists of two silicon-strip telescopes that track individual protons before and after the phantom, and a novel multistage scintillation detector that measures a combination of the residual energy and range of the proton, from which we derive the water equivalent path length (WEPL) of the protons in the scanned object. The set of WEPL values and the associated paths of protons passing through the object over a 360° angular scan are processed by an iterative, parallelizable reconstruction algorithm that runs on modern GP-GPU hardware. In order to assess the performance of the scanner, we have performed tests with 200 MeV protons from the synchrotron of the Loma Linda University Medical Center and the IBA cyclotron of the Northwestern Medicine Chicago Proton Center. Our first objective was calibration of the instrument, including tracker channel maps and alignment as well as the WEPL calibration. Then we performed the first CT scans on a series of phantoms. The very high sustained rate of data acquisition, exceeding one million protons per second, allowed a full 360° scan to be completed in less than 10 minutes, and reconstruction of a CATPHAN 404 phantom verified accurate reconstruction of the proton relative stopping power in a variety of materials. PMID:27127307
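
    One common building block of such iterative, parallelizable reconstructions is a Kaczmarz/ART-style update that nudges the relative-stopping-power image along each proton's path until the predicted WEPL matches the measured one. The toy NumPy sketch below uses straight-line paths and a small 2-D phantom; the scanner's actual reconstruction uses most-likely-path estimation and runs block-wise on GP-GPUs, so all sizes and parameters here are illustrative.

      # Toy sketch of the iterative pCT reconstruction idea: a Kaczmarz/ART update
      # that adjusts the relative-stopping-power (RSP) image along each proton path
      # so its predicted WEPL matches the measured one. Straight-line paths and a
      # tiny 2-D phantom stand in for the real most-likely-path, GPU implementation.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 32
      phantom = np.zeros((n, n))
      phantom[8:24, 8:24] = 1.0          # water-like block (RSP ~ 1)
      phantom[14:18, 14:18] = 1.05       # small insert with slightly higher RSP

      def path_cells(angle, offset, n):
          """Cells crossed by a straight proton path (crude equal-step sampling)."""
          c, s = np.cos(angle), np.sin(angle)
          ts = np.linspace(-n, n, 4 * n)
          xs = n / 2 + offset * -s + ts * c
          ys = n / 2 + offset * c + ts * s
          keep = (xs >= 0) & (xs < n) & (ys >= 0) & (ys < n)
          cells = np.unique(np.stack([xs[keep].astype(int),
                                      ys[keep].astype(int)], axis=1), axis=0)
          return cells[:, 0], cells[:, 1]

      # Simulated measurements: WEPL = path integral of RSP (cell size = 1).
      protons = [(rng.uniform(0, np.pi), rng.uniform(-n / 2, n / 2)) for _ in range(6000)]
      data = []
      for ang, off in protons:
          ix, iy = path_cells(ang, off, n)
          data.append((ix, iy, phantom[ix, iy].sum()))

      # ART sweeps: each proton's update is independent, hence easy to parallelize
      # over protons (the real reconstruction does this block-wise on the GPU).
      recon, lam = np.zeros((n, n)), 0.1
      for _ in range(10):
          for ix, iy, wepl in data:
              resid = wepl - recon[ix, iy].sum()
              recon[ix, iy] += lam * resid / len(ix)

      err = np.abs(recon - phantom)[8:24, 8:24].mean()
      print(f"mean absolute RSP error inside the object: {err:.3f}")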

  18. Novel scheme for rapid parallel parameter estimation of gravitational waves from compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Pankow, C.; Brady, P.; Ochsner, E.; O'Shaughnessy, R.

    2015-07-01

    We introduce a highly parallelizable architecture for estimating parameters of compact binary coalescence using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g., masses) with coefficients that depend on the observer dependent extrinsic parameters (e.g., distance, sky position). The data is then prefiltered against those modes, at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We efficiently parallelize our intrinsic space calculation by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and prefiltering happen only once, the cost of integration dominates the procedure. Also, we operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of ≲5 %, and even smaller in many cases. With a bounded runtime independent of the waveform model starting frequency, a nearly unchanged strategy could estimate neutron star (NS)-NS parameters in the 2018 advanced LIGO era. Our algorithm is usable with any noise curve and existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.
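
    The structural trick, filtering the data against the waveform once per intrinsic-parameter point and then integrating a cheap likelihood over the extrinsic parameters by Monte Carlo, can be sketched with a drastically simplified toy model. Below, a toy chirp in white noise stands in for detector data, and the only extrinsic parameters are an overall amplitude and phase; all values are illustrative and this is not the paper's pipeline.

      # Toy sketch of the structure: for fixed intrinsic parameters, compute the
      # data/template inner products once (the expensive step), then Monte
      # Carlo-integrate the likelihood over cheap extrinsic parameters (here just
      # an amplitude ~ 1/distance and a constant phase).
      import numpy as np

      rng = np.random.default_rng(5)
      t = np.linspace(0, 1, 2048)
      phase_t = 2 * np.pi * (30 * t + 40 * t ** 2)          # toy chirp phase
      h_c = np.exp(1j * phase_t) * np.hanning(len(t))       # complex template

      # Simulated data: the template at amplitude 0.5 and phase 0.8, plus unit noise.
      true_amp, true_phase = 0.5, 0.8
      data = true_amp * np.real(h_c * np.exp(1j * true_phase)) + rng.standard_normal(len(t))

      # Expensive part, done once per intrinsic-parameter point: inner products.
      d_h = np.sum(data * h_c)                 # complex overlap <d|h_c>
      h_h = 0.5 * np.sum(np.abs(h_c) ** 2)     # ~ <h|h> per unit amplitude, any phase

      def log_likelihood(amp, phase):
          # ln L = <d|h(amp, phase)> - <h|h>/2 for unit-variance white noise.
          return amp * np.real(d_h * np.exp(1j * phase)) - 0.5 * amp ** 2 * h_h

      # Cheap part: Monte Carlo integration over the extrinsic parameters.
      n_mc = 200_000
      amps = rng.uniform(0.0, 2.0, n_mc)
      phases = rng.uniform(0.0, 2 * np.pi, n_mc)
      logL = log_likelihood(amps, phases)
      evidence = np.mean(np.exp(logL - logL.max()))   # marginalized likelihood (relative)
      best = np.argmax(logL)
      print(f"relative evidence: {evidence:.3e}, "
            f"best amp {amps[best]:.2f}, best phase {phases[best]:.2f}")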

  19. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, IT requirements are huge due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for massive number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters with very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands for this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond that, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage but, more importantly, can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for fields such as the life-science and health-care sectors, as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  20. A Fast Experimental Scanner for Proton CT: Technical Performance and First Experience with Phantom Scans.

    PubMed

    Johnson, Robert P; Bashkirov, Vladimir; DeWitt, Langley; Giacometti, Valentina; Hurley, Robert F; Piersimoni, Pierluigi; Plautz, Tia E; Sadrozinski, Hartmut F-W; Schubert, Keith; Schulte, Reinhard; Schultze, Blake; Zatserklyaniy, Andriy

    2016-02-01

    We report on the design, fabrication, and first tests of a tomographic scanner developed for proton computed tomography (pCT) of head-sized objects. After extensive preclinical testing, pCT is intended to be employed in support of proton therapy treatment planning and pre-treatment verification in patients undergoing particle-beam therapy. The scanner consists of two silicon-strip telescopes that track individual protons before and after the phantom, and a novel multistage scintillation detector that measures a combination of the residual energy and range of the proton, from which we derive the water equivalent path length (WEPL) of the protons in the scanned object. The set of WEPL values and the associated paths of protons passing through the object over a 360° angular scan are processed by an iterative, parallelizable reconstruction algorithm that runs on modern GP-GPU hardware. In order to assess the performance of the scanner, we have performed tests with 200 MeV protons from the synchrotron of the Loma Linda University Medical Center and the IBA cyclotron of the Northwestern Medicine Chicago Proton Center. Our first objective was calibration of the instrument, including tracker channel maps and alignment as well as the WEPL calibration. Then we performed the first CT scans on a series of phantoms. The very high sustained rate of data acquisition, exceeding one million protons per second, allowed a full 360° scan to be completed in less than 10 minutes, and reconstruction of a CATPHAN 404 phantom verified accurate reconstruction of the proton relative stopping power in a variety of materials.

  1. A Fast Experimental Scanner for Proton CT: Technical Performance and First Experience With Phantom Scans

    NASA Astrophysics Data System (ADS)

    Johnson, Robert P.; Bashkirov, Vladimir; DeWitt, Langley; Giacometti, Valentina; Hurley, Robert F.; Piersimoni, Pierluigi; Plautz, Tia E.; Sadrozinski, Hartmut F.-W.; Schubert, Keith; Schulte, Reinhard; Schultze, Blake; Zatserklyaniy, Andriy

    2016-02-01

    We report on the design, fabrication, and first tests of a tomographic scanner developed for proton computed tomography (pCT) of head-sized objects. After extensive preclinical testing, pCT is intended to be employed in support of proton therapy treatment planning and pre-treatment verification in patients undergoing particle-beam therapy. The scanner consists of two silicon-strip telescopes that track individual protons before and after the phantom, and a novel multistage scintillation detector that measures a combination of the residual energy and range of the proton, from which we derive the water equivalent path length (WEPL) of the protons in the scanned object. The set of WEPL values and the associated paths of protons passing through the object over a 360 ° angular scan are processed by an iterative, parallelizable reconstruction algorithm that runs on modern GP-GPU hardware. In order to assess the performance of the scanner, we have performed tests with 200 MeV protons from the synchrotron of the Loma Linda University Medical Center and the IBA cyclotron of the Northwestern Medicine Chicago Proton Center. Our first objective was calibration of the instrument, including tracker channel maps and alignment as well as the WEPL calibration. Then we performed the first CT scans on a series of phantoms. The very high sustained rate of data acquisition, exceeding one million protons per second, allowed a full 360 ° scan to be completed in less than 10 minutes, and reconstruction of a CATPHAN 404 phantom verified accurate reconstruction of the proton relative stopping power in a variety of materials.

  2. ECG-gated interventional cardiac reconstruction for non-periodic motion.

    PubMed

    Rohkohl, Christopher; Lauritsch, Günter; Biller, Lisa; Hornegger, Joachim

    2010-01-01

    The 3-D reconstruction of cardiac vasculature using C-arm CT is an active and challenging field of research. In interventional environments, patients often have arrhythmic heart signals or cannot hold their breath during the complete data acquisition. This important group of patients cannot be reconstructed with current approaches, which strongly depend on a high degree of cardiac motion periodicity to work properly. In last year's MICCAI contribution, a first algorithm was presented that is able to estimate non-periodic 4-D motion patterns. However, to some degree that algorithm still depends on periodicity, as it requires a prior image which is obtained using a simple ECG-gated reconstruction. In this work we aim to provide a solution to this problem by developing a motion-compensated ECG-gating algorithm. It is built upon a 4-D time-continuous affine motion model which is capable of compactly describing highly non-periodic motion patterns. A stochastic optimization scheme is derived which minimizes the error between the measured projection data and the forward projection of the motion-compensated reconstruction. For evaluation, the algorithm is applied to 5 datasets of the left coronary arteries of patients that ignored the breath hold command and/or had arrhythmic heart signals during the data acquisition. By applying the developed algorithm, the average visibility of the vessel segments could be increased by 27%. The results show that the proposed algorithm provides excellent reconstruction quality in cases where classical approaches fail. The algorithm is highly parallelizable and a clinically feasible runtime of under 4 minutes is achieved using modern graphics card hardware.

  3. Parameterization, sensitivity analysis, and inversion: an investigation using groundwater modeling of the surface-mined Tivoli-Guidonia basin (Metropolitan City of Rome, Italy)

    NASA Astrophysics Data System (ADS)

    La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto

    2016-09-01

    With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.

  4. CSciBox: An Intelligent Assistant for Dating Ice and Sediment Cores

    NASA Astrophysics Data System (ADS)

    Finlinson, K.; Bradley, E.; White, J. W. C.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; Jones, T. R.; Lindsay, C. M.; Israelsen, B.

    2015-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives. It incorporates a number of data-processing and visualization facilities, ranging from simple interpolation to reservoir-age correction and 14C calibration via the Calib algorithm, as well as a number of firn and ice-flow models. It employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form, and offers the user access to those data and computational elements via a modern graphical user interface (GUI). In the case of truly large data or computations, CSciBox is parallelizable across modern multi-core processors, or clusters, or even the cloud. The code is open source and freely available on github, as are one-click installers for various versions of Windows and Mac OSX. The system's architecture allows users to incorporate their own software in the form of computational components that can be built smoothly into CSciBox workflows, taking advantage of CSciBox's GUI, data importing facilities, and plotting capabilities. To date, BACON and StratiCounter have been integrated into CSciBox as embedded components. The user can manipulate and compose all of these tools and facilities as she sees fit. Alternatively, she can employ CSciBox's automated reasoning engine, which uses artificial intelligence techniques to explore the gamut of age models and cross-dating scenarios automatically. The automated reasoning engine captures the knowledge of expert geoscientists, and can output a description of its reasoning.

  5. A Morphing Radiator for High-Turndown Thermal Control of Crewed Space Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Hartl, Darren J.; Sheth, Rubik; Dinsmore, Craig

    2015-01-01

    Spacecraft designed for missions beyond low earth orbit (LEO) face a difficult thermal control challenge, particularly in the case of crewed vehicles, where the thermal control system (TCS) must maintain a relatively constant internal environment temperature despite a vastly varying external thermal environment and despite heat rejection needs that are contrary to the potential of the environment. In other words, the thermal control system is required to reject a higher heat load to warm environments and a lower heat load to cold environments, necessitating a high turndown ratio. A modern thermal control system is capable of a turndown ratio on the order of 12:1, but crew safety and environment compatibility constrain these solutions to massive multi-loop fluid systems. This paper discusses the analysis of a unique radiator design which employs the behavior of shape memory alloys (SMA) to vary the turndown of, and thus enable, a single-loop vehicle thermal control system for space exploration vehicles. This design, a morphing radiator, varies its shape in response to facesheet temperature to control its view of space and its primary surface emissivity. Because temperature dependence is inherent to SMA behavior, the design requires no accommodation for control, instrumentation, or power supply in order to operate. Thermal and radiation modeling of the morphing radiator predicts a turndown ranging from 11.9:1 to 35:1 independent of TCS configuration. Stress and deformation analyses predict the desired morphing behavior of the concept. A system-level mass analysis shows that by enabling a single-loop architecture this design could reduce the TCS mass by between 139 kg and 225 kg. The concept is demonstrated in proof-of-concept benchtop tests.

  6. Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data

    PubMed Central

    Qin, Xinyan; Wu, Gongping; Fan, Fei

    2018-01-01

    Power lines are extending into complex environments (e.g., lakes and forests), and the distribution of power lines on a tower is becoming complicated (e.g., multi-loop and multi-bundle). As a result, power line inspection is becoming a heavier and more difficult task. Advanced LiDAR technology is increasingly being used to address these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as the processing unit; secondly, an optimal elevation threshold is constructed to remove ground points without relying on existing filtering algorithms, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data are respectively extracted by a structured partition based on POS data (SPPD) algorithm from “layer” to “block” according to the power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. Local neighborhood statistics and a 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LiDAR systems in our study. The experimental results demonstrate that an average 90.6% accuracy and an average 98.2% precision at the point cloud level can be achieved. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as for automatic detection and location of security risks, so as to improve the intelligence level of power line inspection. PMID:29690560
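
    As a minimal illustration of the elevation-threshold step above (the array names and the 5th-percentile ground estimate are assumptions of this sketch; the paper's optimal threshold and SPPD partitioning are not reproduced), ground points in one span could be removed as follows:

        import numpy as np

        def remove_ground(span_points, margin=2.0):
            """Drop points within `margin` meters of an estimated ground level.

            span_points : (N, 3) array of x, y, z for a single span; z is height.
            The ground level is crudely estimated from a low percentile of z.
            """
            z = span_points[:, 2]
            ground_level = np.percentile(z, 5)      # crude per-span ground estimate
            keep = z > ground_level + margin        # elevation threshold
            return span_points[keep]

        # usage: elevated = remove_ground(span_points, margin=2.0)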

  7. Analytic Result for the Two-loop Six-point NMHV Amplitude in N = 4 Super Yang-Mills Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Lance J.; Drummond, James M.

    2012-02-15

    We provide a simple analytic formula for the two-loop six-point ratio function of planar N = 4 super Yang-Mills theory. This result extends the analytic knowledge of multi-loop six-point amplitudes beyond those with maximal helicity violation. We make a natural ansatz for the symbols of the relevant functions appearing in the two-loop amplitude, and impose various consistency conditions, including symmetry, the absence of spurious poles, the correct collinear behavior, and agreement with the operator product expansion for light-like (super) Wilson loops. This information reduces the ansatz to a small number of relatively simple functions. In order to fix these parameters uniquely, we utilize an explicit representation of the amplitude in terms of loop integrals that can be evaluated analytically in various kinematic limits. The final compact analytic result is expressed in terms of classical polylogarithms, whose arguments are rational functions of the dual conformal cross-ratios, plus precisely two functions that are not of this type. One of the functions, the loop integral Ω^(2), also plays a key role in a new representation of the remainder function R_6^(2) in the maximally helicity violating sector. Another interesting feature at two loops is the appearance of a new (parity odd) x (parity odd) sector of the amplitude, which is absent at one loop, and which is uniquely determined in a natural way in terms of the more familiar (parity even) x (parity even) part. The second non-polylogarithmic function, the loop integral Ω-tilde^(2), characterizes this sector. Both Ω^(2) and Ω-tilde^(2) can be expressed as one-dimensional integrals over classical polylogarithms with rational arguments.
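
    For orientation, the dual conformal cross-ratios mentioned above are, in one common convention (supplied here for context, not quoted from the record),

        u_1 = \frac{x_{13}^2\, x_{46}^2}{x_{14}^2\, x_{36}^2}, \qquad
        u_2 = \frac{x_{24}^2\, x_{15}^2}{x_{25}^2\, x_{14}^2}, \qquad
        u_3 = \frac{x_{35}^2\, x_{26}^2}{x_{36}^2\, x_{25}^2},

    where x_{ij}^2 = (x_i - x_j)^2 and the dual coordinates x_i are related to the external momenta by p_i = x_i - x_{i+1}.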

  8. The trend of digital control system design for nuclear power plants in Korea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S. H.; Jung, H. Y.; Yang, C. Y.

    2006-07-01

    Currently there are 20 nuclear power plants (NPPs) in operation in Korea, and 6 more units are under construction. The control systems of those NPPs have been developed alongside advances in technology. Control systems started with on-off control using relay logic, evolved into solid-state logic using TTL ICs, and have employed microprocessors since Yonggwang NPP Units 3 and 4, whose construction started in 1989. Multiplexers are also installed in the local plant areas to collect field inputs and to send output signals while communicating with the controllers located in the system cabinets near the main control room, in order to reduce field wiring cables. The design of the digital control system technology for the NPPs in Korea has been optimized to maximize operability as well as safety through design, construction, start-up, and operation experience. The Shin-Kori Units 1 and 2 and Shin-Wolsong Units 1 and 2 NPP projects, both under construction, are progressing at the same time. The Digital Plant Control Systems of these projects have adopted multi-loop controllers, a redundant loop configuration, and a soft control system for the radwaste system. A Programmable Logic Controller (PLC) and a Distributed Control System (DCS), together with a soft control system, are applied in Shin-Kori Units 3 and 4. This paper describes the evolution of control systems at the NPPs in Korea, as well as the experience and design improvements gained through observation of the latest failure of the digital control system. In addition, the design concept and trends of the digital control systems being applied to NPPs in Korea are introduced. (authors)

  9. A Morphing Radiator for High-Turndown Thermal Control of Crewed Space Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Hartl, Darren J.; Sheth, Rubik; Dinsmore, Craig

    2014-01-01

    Spacecraft designed for missions beyond low earth orbit (LEO) face a difficult thermal control challenge, particularly in the case of crewed vehicles where the thermal control system (TCS) must maintain a relatively constant internal environment temperature despite a vastly varying external thermal environment and despite heat rejection needs that are contrary to the potential of the environment. A thermal control system may be required to reject a higher heat load to warm environments and a lower heat load to cold environments, necessitating a relatively high turndown ratio. A modern thermal control system is capable of a turndown ratio of on the order of 12:1, but crew safety and environment compatibility have constrained these solutions to massive multi-loop fluid systems. This paper discusses the analysis of a unique radiator design that employs the behavior of shape memory alloys (SMAs) to vary the turndown of, and thus enable, a single-loop vehicle thermal control system for space exploration vehicles. This design, a morphing radiator, varies its shape in response to facesheet temperature to control view of space and primary surface emissivity. Because temperature dependence is inherent to SMA behavior, the design requires no accommodation for control, instrumentation, or power supply in order to operate. Thermal and radiation modeling of the morphing radiator predict a turndown ranging from 11.9:1 to 35:1 independent of TCS configuration. Coupled thermal-stress analyses predict that the desired morphing behavior of the concept is attainable. A system level mass analysis shows that by enabling a single loop architecture this design could reduce the TCS mass by between 139 kg and 225 kg. The concept has been demonstrated in proof-of-concept benchtop tests.

  10. A sensitive and innovative detection method for rapid C-reactive proteins analysis based on a micro-fluxgate sensor system

    PubMed Central

    Yang, Zhen; Zhi, Shaotao; Feng, Zhu; Lei, Chong; Zhou, Yong

    2018-01-01

    A sensitive and innovative assay system based on a micro-MEMS-fluxgate sensor and immunomagnetic beads-labels was developed for the rapid analysis of C-reactive proteins (CRP). The fluxgate sensor presented in this study was fabricated through standard micro-electro-mechanical system technology. A multi-loop magnetic core made of Fe-based amorphous ribbon was employed as the sensing element, and 3-D solenoid copper coils were used to control the sensing core. Antibody-conjugated immunomagnetic microbeads were strategically utilized as signal tags to label the CRP via the specific conjugation of CRP to polyclonal CRP antibodies. Separate Au film substrates were applied as immunoplatforms to immobilize CRP-beads labels through classical sandwich assays. Detection and quantification of the CRP at different concentrations were implemented by detecting the stray field of CRP labeled magnetic beads using the newly-developed micro-fluxgate sensor. The resulting system exhibited the required sensitivity, stability, reproducibility, and selectivity. A detection limit as low as 0.002 μg/mL CRP with a linearity range from 0.002 μg/mL to 10 μg/mL was achieved, and this suggested that the proposed biosystem possesses high sensitivity. In addition to the extremely low detection limit, the proposed method can be easily manipulated and possesses a quick response time. The response time of our sensor was less than 5 s, and the entire detection period for CRP analysis can be completed in less than 30 min using the current method. Given the detection performance and other advantages such as miniaturization, excellent stability and specificity, the proposed biosensor can be considered as a potential candidate for the rapid analysis of CRP, especially for point-of-care platforms. PMID:29601593

  11. Model predictive control of a solar-thermal reactor

    NASA Astrophysics Data System (ADS)

    Saade Saade, Maria Elizabeth

    Solar-thermal reactors represent a promising alternative to fossil fuels because they can harvest solar energy and transform it into storable and transportable fuels. The operation of solar-thermal reactors is restricted by the available sunlight and its inherently transient behavior, which affects the performance of the reactors and limits their efficiency. Before solar-thermal reactors can become commercially viable, they need to be able to maintain continuous high-performance operation, even in the presence of passing clouds. A well-designed control system can preserve product quality and maintain stable product compositions, resulting in a more efficient and cost-effective operation, which can ultimately lead to scale-up and commercialization of solar thermochemical technologies. In this work, we propose a model predictive control (MPC) system for a solar-thermal reactor for the steam-gasification of biomass. The proposed controller aims at rejecting the disturbances in solar irradiation caused by the presence of clouds. A first-principles dynamic model of the process was developed. The model was used to study the dynamic responses of the process variables and to identify a linear time-invariant model used in the MPC algorithm. To provide an estimation of the disturbances for the control algorithm, a one-minute-ahead direct normal irradiance (DNI) predictor was developed. The proposed predictor utilizes information obtained through the analysis of sky images, in combination with current atmospheric measurements, to produce the DNI forecast. In the end, a robust controller was designed that is capable of rejecting disturbances within the operating region. Extensive simulation experiments showed that the controller outperforms a finely-tuned multi-loop feedback control strategy. The results obtained suggest that our controller is suitable for practical implementation.
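
    A minimal receding-horizon sketch of the idea, assuming an identified discrete-time linear model x_{k+1} = A x_k + B u_k + E d_k, a disturbance forecast d, and an unconstrained quadratic tracking cost (all matrices, weights, and the absence of constraints are illustrative assumptions, not the thesis formulation):

        import numpy as np

        def mpc_step(A, B, E, x0, d_forecast, x_ref, horizon, r=0.1):
            """Solve an unconstrained finite-horizon tracking problem by least squares
            and return only the first control move (receding horizon)."""
            n, m = A.shape[0], B.shape[1]
            # Free response: propagate the state with zero input and forecast disturbances.
            free = np.zeros((horizon, n))
            x = x0.copy()
            for k in range(horizon):
                x = A @ x + E @ d_forecast[k]
                free[k] = x
            # Forced response: block lower-triangular map from the input sequence to the states.
            G = np.zeros((horizon * n, horizon * m))
            AkB = [B]
            for _ in range(1, horizon):
                AkB.append(A @ AkB[-1])
            for i in range(horizon):
                for j in range(i + 1):
                    G[i * n:(i + 1) * n, j * m:(j + 1) * m] = AkB[i - j]
            # Minimize sum_k ||x_k - x_ref||^2 + r * ||u_k||^2 via the normal equations.
            rhs = np.tile(x_ref, horizon) - free.reshape(-1)
            u = np.linalg.solve(G.T @ G + r * np.eye(horizon * m), G.T @ rhs)
            return u[:m]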

  12. Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data.

    PubMed

    Qin, Xinyan; Wu, Gongping; Lei, Jin; Fan, Fei; Ye, Xuhui

    2018-04-22

    Power lines are extending into complex environments (e.g., lakes and forests), and the distribution of power lines on a tower is becoming complicated (e.g., multi-loop and multi-bundle). As a result, power line inspection is becoming a heavier and more difficult task. Advanced LiDAR technology is increasingly being used to address these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as the processing unit; secondly, an optimal elevation threshold is constructed to remove ground points without relying on existing filtering algorithms, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data are respectively extracted by a structured partition based on POS data (SPPD) algorithm from "layer" to "block" according to the power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. Local neighborhood statistics and a 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LiDAR systems in our study. The experimental results demonstrate that an average 90.6% accuracy and an average 98.2% precision at the point cloud level can be achieved. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as for automatic detection and location of security risks, so as to improve the intelligence level of power line inspection.

  13. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferreira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used in different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is to use the cross correlation, taking the position of the highest value in the correlation image. The shift is then resolved in whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique described before, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. With the original images, a subpixel shift can be applied by multiplying the discrete Fourier transform by linear phases with different slopes. This method is time consuming because every candidate shift requires a new calculation. The algorithm, being highly parallelizable, is very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by first obtaining a pixel-level estimate from FFT-based correlation and then refining it at the subpixel level using the technique described before. We consider this a 'brute force' method. We present a benchmark of the algorithm consisting of a first approach at pixel resolution followed by subpixel refinement, decreasing the shift step in every loop and achieving high resolution in a few steps. This program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
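
    A minimal numpy sketch of the 'brute force' scheme described above (illustrative only; the benchmarked code of the paper is GPU-based): a coarse integer shift from FFT cross-correlation, then a scan over subpixel shifts applied as linear phase ramps in the Fourier domain.

        import numpy as np

        def coarse_shift(a, b):
            """Integer shift d maximizing the correlation of a(x) with b(x - d)."""
            xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
            peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
            return np.array([p if p <= s // 2 else p - s
                             for p, s in zip(peak, a.shape)], dtype=float)

        def refine_subpixel(a, b, coarse, step=0.1, span=1.0):
            """Scan subpixel shifts around `coarse` via linear phase ramps."""
            ky = np.fft.fftfreq(a.shape[0])[:, None]
            kx = np.fft.fftfreq(a.shape[1])[None, :]
            B = np.fft.fft2(b)
            best, best_score = coarse, -np.inf
            for dy in np.arange(coarse[0] - span, coarse[0] + span, step):
                for dx in np.arange(coarse[1] - span, coarse[1] + span, step):
                    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))   # shifts b by (dy, dx)
                    score = np.vdot(a, np.real(np.fft.ifft2(B * ramp))).real
                    if score > best_score:
                        best, best_score = np.array([dy, dx]), score
            return best

        # usage: d = refine_subpixel(img_a, img_b, coarse_shift(img_a, img_b))

    Each candidate shift in the double loop is independent of the others, which is what makes the scan well suited to GPU execution.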

  14. Non-adiabatic Excited State Molecule Dynamics Modeling of Photochemistry and Photophysics of Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Tammie Renee; Tretiak, Sergei

    2017-01-06

    Understanding and controlling excited state dynamics lies at the heart of all our efforts to design photoactive materials with desired functionality. This tailor-design approach has become the standard for many technological applications (e.g., solar energy harvesting) including the design of organic conjugated electronic materials with applications in photovoltaic and light-emitting devices. Over the years, our team has developed efficient LANL-based codes to model the relevant photophysical processes following photoexcitation (spatial energy transfer, excitation localization/delocalization, and/or charge separation). The developed approach allows the non-radiative relaxation to be followed on up to ~10 ps timescales for large realistic molecules (hundreds of atoms in size) in the realistic solvent dielectric environment. The Collective Electronic Oscillator (CEO) code is used to compute electronic excited states, and the Non-adiabatic Excited State Molecular Dynamics (NA-ESMD) code is used to follow the non-adiabatic dynamics on multiple coupled Born-Oppenheimer potential energy surfaces. Our preliminary NA-ESMD simulations have revealed key photoinduced mechanisms controlling competing interactions and relaxation pathways in complex materials, including organic conjugated polymer materials, and have provided a detailed understanding of photochemical products and intermediates and the internal conversion process during the initiation of energetic materials. This project will be using LANL-based CEO and NA-ESMD codes to model nonradiative relaxation in organic and energetic materials. The NA-ESMD and CEO codes belong to a class of electronic structure/quantum chemistry codes that require large memory, “long-queue-few-core” distribution of resources in order to make useful progress. The NA-ESMD simulations are trivially parallelizable requiring ~300 processors for up to one week runtime to reach a meaningful restart point.

  15. Line-Focused Optical Excitation of Parallel Acoustic Focused Sample Streams for High Volumetric and Analytical Rate Flow Cytometry.

    PubMed

    Kalb, Daniel M; Fencl, Frank A; Woods, Travis A; Swanson, August; Maestas, Gian C; Juárez, Jaime J; Edwards, Bruce S; Shreve, Andrew P; Graves, Steven W

    2017-09-19

    Flow cytometry provides highly sensitive multiparameter analysis of cells and particles but has been largely limited to the use of a single focused sample stream. This limits the analytical rate to ∼50K particles/s and the volumetric rate to ∼250 μL/min. Despite the analytical prowess of flow cytometry, there are applications where these rates are insufficient, such as rare cell analysis in high cellular backgrounds (e.g., circulating tumor cells and fetal cells in maternal blood), detection of cells/particles in large dilute samples (e.g., water quality, urine analysis), or high-throughput screening applications. Here we report a highly parallel acoustic flow cytometer that uses an acoustic standing wave to focus particles into 16 parallel analysis points across a 2.3 mm wide optical flow cell. A line-focused laser and wide-field collection optics are used to excite and collect the fluorescence emission of these parallel streams onto a high-speed camera for analysis. With this instrument format and fluorescent microsphere standards, we obtain analysis rates of 100K/s and flow rates of 10 mL/min, while maintaining optical performance comparable to that of a commercial flow cytometer. The results with our initial prototype instrument demonstrate that the integration of key parallelizable components, including the line-focused laser, particle focusing using multinode acoustic standing waves, and a spatially arrayed detector, can increase analytical and volumetric throughputs by orders of magnitude in a compact, simple, and cost-effective platform. Such instruments will be of great value to applications in need of high-throughput yet sensitive flow cytometry analysis.

  16. Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit

    PubMed Central

    Lawrie, David S.

    2017-01-01

    Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processor Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
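
    A minimal serial sketch of one single-locus trajectory (genic selection and the parameter values are illustrative assumptions, not the GO Fish implementation; the GPU gains come from running many such independent loci concurrently):

        import numpy as np

        def wright_fisher(N, s, x0, generations, rng=None):
            """Forward-simulate one biallelic locus; returns the allele-frequency trajectory.

            N : diploid population size, s : selection coefficient of the derived
            allele, x0 : initial frequency of the derived allele.
            """
            rng = np.random.default_rng() if rng is None else rng
            x, traj = x0, [x0]
            for _ in range(generations):
                x_sel = x * (1 + s) / (1 + s * x)          # deterministic selection step
                x = rng.binomial(2 * N, x_sel) / (2 * N)   # binomial drift sampling
                traj.append(x)
            return np.array(traj)

        # independent loci are embarrassingly parallel, e.g.:
        # trajectories = [wright_fisher(1000, 0.01, 0.05, 500) for _ in range(10_000)]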

  17. Non-rigid CT/CBCT to CBCT registration for online external beam radiotherapy guidance

    NASA Astrophysics Data System (ADS)

    Zachiu, Cornel; de Senneville, Baudouin Denis; Tijssen, Rob H. N.; Kotte, Alexis N. T. J.; Houweling, Antonetta C.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; Moonen, Chrit T. W.; Ries, Mario

    2018-01-01

    Image-guided external beam radiotherapy (EBRT) allows radiation dose deposition with a high degree of accuracy and precision. Guidance is usually achieved by estimating the displacements, via image registration, between cone beam computed tomography (CBCT) and computed tomography (CT) images acquired at different stages of the therapy. The resulting displacements are then used to reposition the patient such that the location of the tumor at the time of treatment matches its position during planning. Moreover, ongoing research aims to use CBCT-CT image registration for online plan adaptation. However, CBCT images are usually acquired using a small number of x-ray projections and/or low beam intensities. This often leads to the images being subject to low contrast, low signal-to-noise ratio and artifacts, which ends up hampering the image registration process. Previous studies addressed this by integrating additional image processing steps into the registration procedure. However, these steps are usually designed for particular image acquisition schemes, therefore limiting their use on a case-by-case basis. In the current study we address CT to CBCT and CBCT to CBCT registration by means of the recently proposed EVolution registration algorithm. Contrary to previous approaches, EVolution does not require the integration of additional image processing steps in the registration scheme. Moreover, the algorithm requires a low number of input parameters, is easily parallelizable and provides an elastic deformation on a point-by-point basis. Results have shown that relative to a pure CT-based registration, the intrinsic artifacts present in typical CBCT images only have a sub-millimeter impact on the accuracy and precision of the estimated deformation. In addition, the algorithm has low computational requirements, which are compatible with online image-based guidance of EBRT treatments.

  18. HYDROSCAPE: A SCAlable and ParallelizablE Rainfall Runoff Model for Hydrological Applications

    NASA Astrophysics Data System (ADS)

    Piccolroaz, S.; Di Lazzaro, M.; Zarlenga, A.; Majone, B.; Bellin, A.; Fiori, A.

    2015-12-01

    In this work we present HYDROSCAPE, an innovative streamflow routing method based on the travel time approach and modeled through a fine-scale geomorphological description of hydrological flow paths. The model is designed to be easily coupled with weather forecast or climate models providing the hydrological forcing, while at the same time preserving the geomorphological dispersion of the river network, which is kept unchanged independently of the grid size of the rainfall input. This makes HYDROSCAPE particularly suitable for multi-scale applications, ranging from medium size catchments up to the continental scale, and for investigating the effects of extreme rainfall events that require an accurate description of basin response timing. A key feature of the model is its computational efficiency, which allows a large number of simulations to be performed for sensitivity/uncertainty analyses in a Monte Carlo framework. Further, the model is highly parsimonious, involving the calibration of only three parameters: one defining the residence time of the hillslope response, one for channel velocity, and a multiplicative factor accounting for uncertainties in the identification of the potential maximum soil moisture retention in the SCS-CN method. HYDROSCAPE is designed with a simple and flexible modular structure, which makes it particularly amenable to massive parallelization, to customization according to specific user needs and preferences (e.g., the rainfall-runoff model), and to continuous development and improvement. Finally, the possibility to specify the desired computational time step and to evaluate streamflow at any location in the domain makes HYDROSCAPE an attractive tool for many hydrological applications, and a valuable alternative to more complex and highly parametrized large scale hydrological models. Together with the model development and features, we present an application to the Upper Tiber River basin (Italy), providing a practical example of model performance and characteristics.
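
    For readers unfamiliar with the SCS-CN step mentioned above, a minimal sketch of the standard curve-number runoff relation (the multiplicative factor `f` on the retention stands in for the calibrated parameter described in the abstract and is an assumption of this sketch):

        def scs_cn_runoff(P, CN, f=1.0, ia_ratio=0.2):
            """Event runoff depth Q (mm) from rainfall P (mm) and curve number CN."""
            S = f * (25400.0 / CN - 254.0)   # potential maximum retention (mm)
            Ia = ia_ratio * S                # initial abstraction
            if P <= Ia:
                return 0.0
            return (P - Ia) ** 2 / (P - Ia + S)

        # example: scs_cn_runoff(P=60.0, CN=75) gives the event runoff depth in mm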

  19. Boundaries on Range-Range Constrained Admissible Regions for Optical Space Surveillance

    NASA Astrophysics Data System (ADS)

    Gaebler, J. A.; Axelrad, P.; Schumacher, P. W., Jr.

    We propose a new type of admissible-region analysis for track initiation in multi-satellite problems when apparent angles measured at known stations are the only observable. The goal is to create an efficient and parallelizable algorithm for computing initial candidate orbits for a large number of new targets. It takes at least three angles-only observations to establish an orbit by traditional means. Thus one is faced with a problem that requires N-choose-3 sets of calculations to test every possible combination of the N observations. An alternative approach is to reduce the number of combinations by making hypotheses of the range to a target along the observed line-of-sight. If realistic bounds on the range are imposed, consistent with a given partition of the space of orbital elements, a pair of range possibilities can be evaluated via Lambert's method to find candidate orbits for that partition, which then requires N-choose-2 times M-choose-2 combinations, where M is the average number of range hypotheses per observation. The contribution of this work is a set of constraints that establish bounds on the range-range hypothesis region for a given element-space partition, thereby minimizing M. Two effective constraints were identified which, together, constrain the hypothesis region in range-range space to nearly that of the true admissible region based on an orbital partition. The first constraint is based on the geometry of the vacant orbital focus. The second constraint is based on time-of-flight and Lagrange's form of Kepler's equation. A complete and efficient parallelization of the problem is possible with this approach because the element partitions can be arbitrary and can be handled independently of each other.
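
    A back-of-the-envelope comparison of the two workloads (the values of N and M below are made up for illustration):

        from math import comb

        N, M = 500, 10                       # observations, range hypotheses per observation
        triples = comb(N, 3)                 # classic angles-only initial orbit determination
        pairs = comb(N, 2) * comb(M, 2)      # range-hypothesis pairs evaluated via Lambert
        print(triples, pairs)                # -> 20708500 and 5613750 candidate combinations

    Tightening the constraints that bound M shrinks the second count quadratically, which is why the two constraints described above matter.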

  20. A new augmentation based algorithm for extracting maximal chordal subgraphs

    DOE PAGES

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
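
    A minimal sequential sketch of the augmentation idea using networkx (the paper's parallel algorithm and its ordering strategies are not reproduced; starting from a spanning tree, which is trivially chordal, is just one possible choice of initial spanning chordal subgraph):

        import networkx as nx

        def maximal_chordal_subgraph(G):
            """Greedily augment a spanning chordal subgraph of G until it is maximal."""
            H = nx.minimum_spanning_tree(G)          # a spanning tree/forest is chordal
            changed = True
            while changed:                           # an edge rejected earlier may become
                changed = False                      # admissible after later additions
                for u, v in G.edges():
                    if not H.has_edge(u, v):
                        H.add_edge(u, v)
                        if nx.is_chordal(H):
                            changed = True
                        else:
                            H.remove_edge(u, v)      # (u, v) would create a chordless cycle
            return H

    At termination no remaining edge of G can be added without breaking chordality, so H is maximal; which maximal subgraph is obtained depends on the initial subgraph and the edge order, consistent with the experiments described above.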

  1. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    PubMed

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  2. Parallelizable 3D statistical reconstruction for C-arm tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Wang, Beilei; Barner, Kenneth; Lee, Denny

    2005-04-01

    Clinical diagnosis and security detection tasks increasingly require 3D information which is difficult or impossible to obtain from 2D (two dimensional) radiographs. As a 3D (three dimensional) radiographic and non-destructive imaging technique, digital tomosynthesis is especially suited to cases where 3D information is required while a complete projection data set is not available. Nowadays, FBP (filtered back projection) is extensively used in industry for its speed and simplicity. However, it is hard to deal with situations where only a limited number of projections from constrained directions are available, or where the SNR (signal to noise ratio) of the projections is low. In order to deal with noise and take into account a priori information about the object, a statistical image reconstruction method is described based on the acquisition model of X-ray projections. We formulate a ML (maximum likelihood) function for this model and develop an ordered-subsets iterative algorithm to estimate the unknown attenuation of the object. Simulations show that satisfactory results can be obtained after 1 to 2 iterations, after which there is no significant improvement in image quality. An adaptive Wiener filter is also applied to the reconstructed image to remove noise. Some approximations to speed up the reconstruction computation are also considered. Applying this method to computer generated projections of a revised Shepp phantom and to true projections from diagnostic radiographs of a patient's hand and mammography images yields reconstructions with impressive quality. Parallel programming is also implemented and tested. The quality of the reconstructed object is preserved, while the computation time is reduced by a factor of almost the number of threads used.

  3. Global properties of physically interesting Lorentzian spacetimes

    NASA Astrophysics Data System (ADS)

    Nawarajan, Deloshan; Visser, Matt

    Under normal circumstances most members of the general relativity community focus almost exclusively on the local properties of spacetime, such as the locally Euclidean structure of the manifold and the Lorentzian signature of the metric tensor. When combined with the classical Einstein field equations this gives an extremely successful empirical model of classical gravity and classical matter — at least as long as one does not ask too many awkward questions about global issues (such as global topology and global causal structure). We feel, however, that this is a tactical error — even without invoking full-fledged “quantum gravity” we know that the standard model of particle physics is also an extremely good representation of some parts of empirical reality; and we had better be able to carry over all the good features of the standard model of particle physics — at least into the realm of semi-classical quantum gravity. Doing so gives us some interesting global features that spacetime should possess: On physical grounds spacetime should be space-orientable, time-orientable, and spacetime-orientable, and it should possess a globally defined tetrad (vierbein, or in general a globally defined vielbein/n-bein). So on physical grounds spacetime should be parallelizable. This strongly suggests that the metric is not the fundamental physical quantity; a very good case can be made for the tetrad being more fundamental than the metric. Furthermore, a globally-defined “almost complex structure” is almost unavoidable. Ideas along these lines have previously been mooted, but much is buried in the pre-arXiv literature and is either forgotten or inaccessible. We shall revisit these ideas taking a perspective very much based on empirical physical observation.

  4. High-resolution imaging of magnetic fields using scanning superconducting quantum interference device (SQUID) microscopy

    NASA Astrophysics Data System (ADS)

    Fong de Los Santos, Luis E.

    Development of a scanning superconducting quantum interference device (SQUID) microscope system with interchangeable sensor configurations for imaging magnetic fields of room-temperature (RT) samples with sub-millimeter resolution. The low-critical-temperature (Tc) niobium-based monolithic SQUID sensor is mounted in the tip of a sapphire rod and thermally anchored to the cryostat helium reservoir. A 25 μm sapphire window separates the vacuum space from the RT sample. A positioning mechanism allows adjusting the sample-to-sensor spacing from the top of the Dewar. I have achieved a sensor-to-sample spacing of 100 μm, which could be maintained for periods of up to 4 weeks. Different SQUID sensor configurations are necessary to achieve the best combination of spatial resolution and field sensitivity for a given magnetic source. For imaging thin sections of geological samples, I used a custom-designed monolithic low-Tc niobium bare SQUID sensor, with an effective diameter of 80 μm, and achieved a field sensitivity of 1.5 pT/Hz^(1/2) and a magnetic moment sensitivity of 5.4 x 10^(-18) Am^2/Hz^(1/2) at a sensor-to-sample spacing of 100 μm in the white noise region for frequencies above 100 Hz. Imaging action currents in cardiac tissue requires higher field sensitivity, which can only be achieved by compromising spatial resolution. I developed a monolithic low-Tc niobium multiloop SQUID sensor, with sensor sizes ranging from 250 μm to 1 mm, and achieved sensitivities of 480-180 fT/Hz^(1/2) in the white noise region for frequencies above 100 Hz, respectively. For all sensor configurations, the spatial resolution was comparable to the effective diameter and limited by the sensor-to-sample spacing. Spatial registration allowed us to compare high-resolution images of magnetic fields associated with action currents and optical recordings of transmembrane potentials to study the bidomain nature of cardiac tissue, or to match petrography to magnetic field maps in thin sections of geological samples.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plautz, Tia E.; Johnson, R. P.; Sadrozinski, H. F.-W.

    Purpose: To characterize the modulation transfer function (MTF) of the pre-clinical (phase II) head scanner developed for proton computed tomography (pCT) by the pCT collaboration, and to evaluate the spatial resolution achievable by this system. Methods: Our phase II proton CT scanner prototype consists of two silicon telescopes that track individual protons upstream and downstream from a phantom, and a 5-stage scintillation detector that measures a combination of the residual energy and range of the proton. Residual energy is converted to the water equivalent path length (WEPL) of the protons in the scanned object. The set of WEPL values and associated paths of protons passing through the object over a 360° angular scan is processed by an iterative parallelizable reconstruction algorithm that runs on GP-GPU hardware. A custom edge phantom composed of water-equivalent polymer and tissue-equivalent material inserts was constructed. The phantom was first simulated in Geant4 and then built to perform experimental beam tests with 200 MeV protons at the Northwestern Medicine Chicago Proton Center. The oversampling method was used to construct radial and azimuthal edge spread functions and modulation transfer functions. The spatial resolution was defined by the 10% point of the modulation transfer function in units of lp/cm. Results: The spatial resolution of the image was found to be strongly correlated with the radial position of the insert but independent of the relative stopping power of the insert. The spatial resolution varies between roughly 4 and 6 lp/cm in both the radial and azimuthal directions, depending on the radial displacement of the edge. Conclusion: The amount of image degradation due to our detector system is small compared with the effects of multiple Coulomb scattering, pixelation of the image, and the reconstruction algorithm. Improvements in reconstruction will be made in order to achieve the theoretical limits of spatial resolution.
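
    A minimal numpy sketch of turning an oversampled edge spread function into an MTF and reading off the 10% point (the sampling step `dx` in cm and the Hann taper are assumptions of this sketch, not details taken from the abstract):

        import numpy as np

        def mtf_from_esf(esf, dx):
            """Return spatial frequencies (lp/cm) and the normalized MTF."""
            lsf = np.gradient(esf, dx)               # line spread function
            lsf = lsf * np.hanning(len(lsf))         # taper to reduce spectral leakage
            mtf = np.abs(np.fft.rfft(lsf))
            mtf /= mtf[0]                            # normalize to unity at zero frequency
            freqs = np.fft.rfftfreq(len(lsf), d=dx)  # cycles (line pairs) per cm
            return freqs, mtf

        def resolution_10pct(freqs, mtf):
            """First spatial frequency at which the MTF drops below 10%."""
            below = np.where(mtf < 0.10)[0]
            return freqs[below[0]] if below.size else freqs[-1]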

  6. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand sides.

  7. Automated hierarchical classification of protein domain subfamilies based on functionally-divergent residue signatures

    PubMed Central

    2012-01-01

    Background The NCBI Conserved Domain Database (CDD) consists of a collection of multiple sequence alignments of protein domains that are at various stages of being manually curated into evolutionary hierarchies based on conserved and divergent sequence and structural features. These domain models are annotated to provide insights into the relationships between sequence, structure and function via web-based BLAST searches. Results Here we automate the generation of conserved domain (CD) hierarchies using a combination of heuristic and Markov chain Monte Carlo (MCMC) sampling procedures and starting from a (typically very large) multiple sequence alignment. This procedure relies on statistical criteria to define each hierarchy based on the conserved and divergent sequence patterns associated with protein functional-specialization. At the same time this facilitates the sequence and structural annotation of residues that are functionally important. These statistical criteria also provide a means to objectively assess the quality of CD hierarchies, a non-trivial task considering that the protein subgroups are often very distantly related—a situation in which standard phylogenetic methods can be unreliable. Our aim here is to automatically generate (typically sub-optimal) hierarchies that, based on statistical criteria and visual comparisons, are comparable to manually curated hierarchies; this serves as the first step toward the ultimate goal of obtaining optimal hierarchical classifications. A plot of runtimes for the most time-intensive (non-parallelizable) part of the algorithm indicates a nearly linear time complexity so that, even for the extremely large Rossmann fold protein class, results were obtained in about a day. Conclusions This approach automates the rapid creation of protein domain hierarchies and thus will eliminate one of the most time consuming aspects of conserved domain database curation. At the same time, it also facilitates protein domain annotation by identifying those pattern residues that most distinguish each protein domain subgroup from other related subgroups. PMID:22726767

  8. Amp: A modular approach to machine learning in atomistic simulations

    NASA Astrophysics Data System (ADS)

    Khorshidi, Alireza; Peterson, Andrew A.

    2016-10-01

    Electronic structure calculations, such as those employing Kohn-Sham density functional theory or ab initio wavefunction theories, have allowed for atomistic-level understandings of a wide variety of phenomena and properties of matter at small scales. However, the computational cost of electronic structure methods drastically increases with length and time scales, which makes these methods difficult to use for long time-scale molecular dynamics simulations or large-sized systems. Machine-learning techniques can provide accurate potentials that can match the quality of electronic structure calculations, provided sufficient training data. These potentials can then be used to rapidly simulate large and long time-scale phenomena at similar quality to the parent electronic structure approach. Machine-learning potentials usually take a bias-free mathematical form and can be readily developed for a wide variety of systems. Electronic structure calculations have favorable properties, namely that they are noiseless and that targeted training data can be produced on demand, which make them particularly well-suited for machine learning. This paper discusses our modular approach to atomistic machine learning through the development of the open-source Atomistic Machine-learning Package (Amp), which allows for representations of both the total and atom-centered potential energy surface, in both periodic and non-periodic systems. Potentials developed through the atom-centered approach are simultaneously applicable for systems of various sizes. Interpolation can be enhanced by introducing custom descriptors of the local environment. We demonstrate this in the current work for Gaussian-type, bispectrum, and Zernike-type descriptors. Amp has an intuitive and modular structure with an interface through the Python scripting language, yet has parallelizable Fortran components for demanding tasks; it is designed to integrate closely with the widely used Atomic Simulation Environment (ASE), which makes it compatible with a wide variety of commercial and open-source electronic structure codes. We finally demonstrate that the neural network model inside Amp can accurately interpolate electronic structure energies as well as forces of thousands of multi-species atomic systems.
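
    As a flavor of the atom-centered descriptors mentioned above, a Behler-Parrinello-style radial Gaussian symmetry function with a cosine cutoff (a generic textbook form with made-up parameter values; whether Amp's Gaussian descriptor uses exactly these defaults is not asserted here):

        import numpy as np

        def cosine_cutoff(r, rc):
            """Smoothly send neighbor contributions to zero at the cutoff radius rc."""
            return np.where(r < rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

        def radial_symmetry_function(distances, eta=0.5, rs=0.0, rc=6.5):
            """G^rad for one center atom, given distances to its neighbors (in Angstrom)."""
            r = np.asarray(distances, dtype=float)
            return np.sum(np.exp(-eta * (r - rs) ** 2) * cosine_cutoff(r, rc))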

  9. Online track detection in triggerless mode for INO

    NASA Astrophysics Data System (ADS)

    Jain, A.; Padmini, S.; Joseph, A. N.; Mahesh, P.; Preetha, N.; Behere, A.; Sikder, S. S.; Majumder, G.; Behera, S. P.

    2018-03-01

    The India-based Neutrino Observatory (INO) is a proposed particle physics research project to study atmospheric neutrinos. The INO Iron Calorimeter (ICAL) will consist of 28,800 detectors having 3.6 million electronic channels expected to activate with a 100 Hz singles rate, producing data at a rate of 3 GBps. The data collected contain a few real hits generated by muon tracks and the remaining noise-induced spurious hits. The estimated reduction factor after filtering out data of interest from the generated data is of the order of 10^3. This makes trigger generation critical for efficient data collection and storage. A trigger is generated by detecting coincidence across multiple channels satisfying the trigger criteria, within a small window of 200 ns in the trigger region. As the probability of neutrino interaction is very low, the track detection algorithm has to be efficient and fast enough to process 5 × 10^6 event candidates per second without introducing significant dead time, so that not even a single neutrino event is missed. A hardware-based trigger system is presently proposed for on-line track detection considering the stringent timing requirements. Though the trigger system can be designed to be scalable, the large number of hardware devices and interconnections makes it a complex and expensive solution with limited flexibility. A software-based track detection approach working on the hit information offers an elegant solution with the possibility of varying the trigger criteria for selecting various potentially interesting physics events. An event selection approach for an alternative triggerless readout scheme has been developed. The algorithm is mathematically simple, robust, and parallelizable. It has been validated by detecting simulated muon events for energies in the range of 1 GeV-10 GeV with 100% efficiency at a processing rate of 60 μs/event on a 16-core machine. The algorithm and the result of a proof-of-concept for its faster implementation over multiple cores are presented. The paper also discusses harnessing the computing capabilities of a multi-core computing farm, thereby optimizing the number of nodes required for the proposed system.
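
    A minimal sketch of the sliding-coincidence idea on a single stream of hit timestamps (the 200 ns window comes from the description above; the multiplicity threshold and the absence of any geometry cuts are illustrative assumptions):

        import numpy as np

        def find_event_candidates(hit_times_ns, window_ns=200.0, min_hits=5):
            """Return start times of windows that contain at least `min_hits` hits."""
            t = np.sort(np.asarray(hit_times_ns, dtype=float))
            candidates, j = [], 0
            for i in range(len(t)):
                while t[i] - t[j] > window_ns:     # slide the left edge of the window
                    j += 1
                if i - j + 1 >= min_hits:          # enough hits fall inside the window
                    candidates.append(t[j])
            return np.array(candidates)

    Disjoint chunks of the time axis can be handed to different cores, which is one way such a triggerless scheme can be spread over a multi-core farm.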

  10. Cscibox: A Software System for Age-Model Construction and Evaluation

    NASA Astrophysics Data System (ADS)

    Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.

    2014-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross-dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords. This project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to do analysis on the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, or clusters, or even the cloud. The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more thorough exploration of plausible age models and cross-dating scenarios.

  11. An Integrated Approach to Parameter Learning in Infinite-Dimensional Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Zachary M.; Wendelberger, Joanne Roth

    The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive in terms of both time and money. Existing solutions include batch-parallel simulations, high-dimensional derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as view the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and to arrive more quickly at the desired parameter set.
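
    A minimal sketch of the batch-parallelizable random-walk idea described above, applied to a toy objective. The objective, step size, and batch size are placeholders; the real problem would evaluate an expensive simulation (possibly as independent parallel jobs) instead.

      import numpy as np

      def toy_objective(theta):
          """Stand-in for an expensive simulation-vs-data misfit."""
          return np.sum((theta - np.array([1.0, -2.0, 0.5])) ** 2)

      def batch_random_walk(theta0, n_iters=50, batch_size=8, step=0.3, seed=0):
          rng = np.random.default_rng(seed)
          theta, best = np.asarray(theta0, float), toy_objective(theta0)
          for _ in range(n_iters):
              # Propose a batch of perturbed parameter vectors; in practice each
              # objective evaluation could run as an independent parallel job.
              proposals = theta + step * rng.standard_normal((batch_size, theta.size))
              scores = np.array([toy_objective(p) for p in proposals])
              if scores.min() < best:                  # keep the best descent step
                  best, theta = scores.min(), proposals[scores.argmin()]
          return theta, best

      print(batch_random_walk(np.zeros(3)))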

  12. DESI focal plate mechanical integration and cooling

    NASA Astrophysics Data System (ADS)

    Lambert, A. R.; Besuner, R. W.; Claybaugh, T. M.; Silber, J. H.

    2016-08-01

    The Dark Energy Spectroscopic Instrument (DESI) is under construction to measure the expansion history of the Universe using the Baryon Acoustic Oscillation technique[1]. The spectra of 40 million galaxies over 14000 sq. deg will be measured during the life of the experiment. A new prime focus corrector for the KPNO Mayall telescope will deliver light to 5000 fiber optic positioners. The fibers in turn feed ten broad-band spectrographs. This paper describes the mechanical integration of the DESI focal plate and the thermal system design. The DESI focal plate is comprised of ten identical petal assemblies. Each petal contains 500 robotic fiber positioners. Each petal is a complete, self-contained unit, independent from the others, with integrated power supply, controllers, fiber routing, and cooling services. The major advantages of this scheme are: (1) supports installation and removal of complete petal assemblies in-situ, without disturbing the others, (2) component production, assembly stations, and test procedures are repeated and parallelizable, (3) a complete, full-scale prototype can be built and tested at an early date, (4) each production petal can be surveyed and tested as a complete unit, prior to integration, from the fiber tip at the focal surface to the fiber slit at the spectrograph. The ten petal assemblies will be installed in a single integration ring, which is mounted to the DESI corrector. The aluminum integration ring attaches to the steel corrector barrel via a flexured steel adapter, isolating the focal plate from differential thermal expansions. The plate scale will be kept stable by conductive cooling of the petal assembly. The guider and wavefront sensors (one per petal) will be convectively cooled by forced flow of air. Heat will be removed from the system at ten liquid-cooled cold plates, one per petal, operating at ambient temperature. The entire focal plate structure is enclosed in an insulating shroud, which serves as a thermal barrier between the heat-generating focal plate components and the ambient air of the Mayall dome, to protect the seeing[2].

  13. Completing the census of young stars near the Sun with the FunnelWeb spectroscopic survey

    NASA Astrophysics Data System (ADS)

    Lawson, Warrick; Murphy, Simon; Tinney, Christopher G.; Ireland, Michael; Bessell, Michael S.

    2016-06-01

    From late 2016, the Australian FunnelWeb survey will obtain medium-resolution (R~2000) spectra covering the full optical range for 2 million of the brightest stars (I<12) in the southern sky. It will do so using an upgraded UK Schmidt Telescope at Siding Spring Observatory, equipped with a revolutionary, parallelizable optical fibre positioner ("Starbugs") and spectrograph. The ability to reconfigure a multi-fibre plate in less than 5 minutes allows FunnelWeb to observe more stars per night than any other competing multi-fibre spectrograph and enables a range of previously inefficient bright-star science not attempted since the completion of the HD catalogues in the 1940s. Among its key science aims, FunnelWeb will obtain spectra for thousands of young and adolescent (<1 Gyr) stars near the Sun (<200 pc) across a wide range of spectral types. These spectra will include well-studied youth and activity indicators such as H-alpha, Li I 6708A, Ca II H&K, as well as surface gravity diagnostics (e.g. Na I, K I). In addition, FunnelWeb will obtain stellar parameters (Teff, logg, vsini), abundances (Fe/H, alpha/Fe) and radial velocities to 1-2 km/s for every star in the survey. When combined with high-precision parallaxes and proper motions from the Gaia mission expected from 2017, this dataset will provide a near-complete census of adolescent stars in the solar neighbourhood. It will help reveal the typical formation environments of young solar-type stars, show how such stars move from their stellar nurseries to their adult lives in the field, and identify thousands of high-priority targets for follow-up direct imaging (GPI, SPHERE), transit (including TESS) and radial velocity exoplanet studies. In this poster contribution we introduce the FunnelWeb survey, its science goals and input catalogue, as well as provide an update on the status of the fibre positioner and spectrograph commissioning at Siding Spring.

  14. Transformation diffusion reconstruction of three-dimensional histology volumes from two-dimensional image stacks.

    PubMed

    Casero, Ramón; Siedlecka, Urszula; Jones, Elizabeth S; Gruscheski, Lena; Gibb, Matthew; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2017-05-01

    Traditional histology is the gold standard for tissue studies, but it is intrinsically reliant on two-dimensional (2D) images. Study of volumetric tissue samples such as whole hearts produces a stack of misaligned and distorted 2D images that need to be reconstructed to recover a congruent volume with the original sample's shape. In this paper, we develop a mathematical framework called Transformation Diffusion (TD) for stack alignment refinement as a solution to the heat diffusion equation. This general framework does not require contour segmentation, is independent of the registration method used, and is trivially parallelizable. After the first stack sweep, we also replace registration operations by operations in the space of transformations, which are several orders of magnitude faster and less memory-consuming. Implementing TD with operations in the space of transformations produces our Transformation Diffusion Reconstruction (TDR) algorithm, applicable to general transformations that are closed under inversion and composition. In particular, we provide formulas for translation and affine transformations. We also propose an Approximated TDR (ATDR) algorithm that extends the same principles to tensor-product B-spline transformations. Using TDR and ATDR, we reconstruct a full mouse heart at pixel size 0.92µm×0.92µm, cut 10µm thick, spaced 20µm (84G). Our algorithms employ only local information from transformations between neighboring slices, but the TD framework allows theoretical analysis of the refinement as applying a global Gaussian low-pass filter to the unknown stack misalignments. We also show that reconstruction without an external reference produces large shape artifacts in a cardiac specimen while still optimizing slice-to-slice alignment. To overcome this problem, we use a pre-cutting blockface imaging process previously developed by our group that takes advantage of Brewster's angle and a polarizer to capture the outline of only the topmost layer of wax in the block containing embedded tissue for histological sectioning. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
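
    A minimal sketch of the diffusion idea for the translation case, assuming one 2D translation per slice: each refinement sweep updates a slice's translation with a discrete Laplacian of its neighbours, which acts as a low-pass filter on the stack misalignments. This is an illustration of the principle, not the authors' TDR implementation; the boundary handling and step size are arbitrary choices.

      import numpy as np

      def diffuse_translations(t, n_sweeps=100, dt=0.25):
          """Smooth per-slice 2D translations t (shape [n_slices, 2]) by heat diffusion.

          Interior slices are updated with the discrete Laplacian
          t_i <- t_i + dt * (t_{i-1} - 2 t_i + t_{i+1}); end slices are held fixed
          (one simple boundary choice among several possible ones).
          """
          t = np.asarray(t, dtype=float).copy()
          for _ in range(n_sweeps):
              lap = t[:-2] - 2.0 * t[1:-1] + t[2:]
              t[1:-1] += dt * lap
          return t

      # Toy stack of 9 slices with jittery translations around a smooth trend.
      rng = np.random.default_rng(1)
      trend = np.linspace(0, 4, 9)[:, None] * np.array([1.0, -0.5])
      noisy = trend + rng.normal(scale=0.8, size=trend.shape)
      print(diffuse_translations(noisy))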

  15. Correction of patient motion in cone-beam CT using 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.

    2017-12-01

    Cone-beam CT (CBCT) is increasingly common in guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner for which long scan times made the acquisitions susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies the structural similarity (SSIM) between motion-free and motion-corrected images was >0.995, with significant improvement (p < 0.001) compared to the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides an added benefit that could be valuable in routine use.
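
    The registration described above scores candidate poses by gradient-orientation agreement between each measured projection and a digitally reconstructed radiograph (DRR). A minimal 2D sketch of one such gradient-orientation similarity is given below; it is illustrative only and is not the authors' implementation (no DRR generation or CMA-ES search is included, and the masking threshold is an assumption).

      import numpy as np

      def gradient_orientation_similarity(img_a, img_b, grad_threshold=1e-3):
          """Mean squared cosine of the angle between the two images' gradients.

          Values near 1 mean the edges of the two images are aligned; weak
          gradients are masked out so flat regions do not dominate the score.
          """
          gax, gay = np.gradient(img_a.astype(float))
          gbx, gby = np.gradient(img_b.astype(float))
          mag_a = np.hypot(gax, gay)
          mag_b = np.hypot(gbx, gby)
          mask = (mag_a > grad_threshold) & (mag_b > grad_threshold)
          cos = (gax * gbx + gay * gby) / (mag_a * mag_b + 1e-12)
          return float(np.mean(cos[mask] ** 2)) if mask.any() else 0.0

      # Toy usage: an image compared with a slightly shifted copy of itself.
      x = np.linspace(-3, 3, 128)
      img = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))
      print(gradient_orientation_similarity(img, np.roll(img, 3, axis=0)))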

  16. Lost in folding space? Comparing four variants of the thermodynamic model for RNA secondary structure prediction.

    PubMed

    Janssen, Stefan; Schudoma, Christian; Steger, Gerhard; Giegerich, Robert

    2011-11-03

    Many bioinformatics tools for RNA secondary structure analysis are based on a thermodynamic model of RNA folding. They predict a single "optimal" structure by free energy minimization, enumerate near-optimal structures, or compute base pair probabilities, dot plots, representative structures of different abstract shapes, and Boltzmann probabilities of structures and shapes. Although all programs refer to the same physical model, they implement it with considerable variation for different tasks, and little is known about the effects of the heuristic assumptions and model simplifications used by the programs on the outcome of the analysis. We extract four different models of the thermodynamic folding space which underlie the programs RNAFOLD, RNASHAPES, and RNASUBOPT. Their differences lie in the details of the energy model and the granularity of the folding space. We implement probabilistic shape analysis for all models, and introduce the shape probability shift as a robust measure of model similarity. Using four data sets derived from experimentally solved structures, we provide a quantitative evaluation of the model differences. We find that search space granularity affects the computed shape probabilities less than the over- or underapproximation of free energy by a simplified energy model. Still, the approximations perform similarly enough to implementations of the full model to justify their continued use in settings where computational constraints call for simpler algorithms. As a side observation, we note that the rarely used level 2 shapes, which predict the complete arrangement of helices, multiloops, internal loops and bulges, include the "true" shape in a rather small number of predicted high-probability shapes. This calls for an investigation of new strategies to extract high-probability members from the (very large) level 2 shape space of an RNA sequence. We provide implementations of all four models, written in a declarative style that makes them easy to modify. Based on our study, future work on thermodynamic RNA folding may make a choice of model based on our empirical data, and can take our implementations as a starting point for further program development.

  17. High-resolution room-temperature sample scanning superconducting quantum interference device microscope configurable for geological and biomagnetic applications

    NASA Astrophysics Data System (ADS)

    Fong, L. E.; Holzer, J. R.; McBride, K. K.; Lima, E. A.; Baudenbacher, F.; Radparvar, M.

    2005-05-01

    We have developed a scanning superconducting quantum interference device (SQUID) microscope system with interchangeable sensor configurations for imaging magnetic fields of room-temperature (RT) samples with submillimeter resolution. The low-critical-temperature (Tc) niobium-based monolithic SQUID sensors are mounted on the tip of a sapphire and thermally anchored to the helium reservoir. A 25μm sapphire window separates the vacuum space from the RT sample. A positioning mechanism allows us to adjust the sample-to-sensor spacing from the top of the Dewar. We achieved a sensor-to-sample spacing of 100μm, which could be maintained for periods of up to four weeks. Different SQUID sensor designs are necessary to achieve the best combination of spatial resolution and field sensitivity for a given source configuration. For imaging thin sections of geological samples, we used a custom-designed monolithic low-Tc niobium bare SQUID sensor, with an effective diameter of 80μm, and achieved a field sensitivity of 1.5pT/Hz1/2 and a magnetic moment sensitivity of 5.4×10-18Am2/Hz1/2 at a sensor-to-sample spacing of 100μm in the white noise region for frequencies above 100Hz. Imaging action currents in cardiac tissue requires a higher field sensitivity, which can only be achieved by compromising spatial resolution. We developed a monolithic low-Tc niobium multiloop SQUID sensor, with sensor sizes ranging from 250μm to 1mm, and achieved sensitivities of 480-180fT /Hz1/2 in the white noise region for frequencies above 100Hz, respectively. For all sensor configurations, the spatial resolution was comparable to the effective diameter and limited by the sensor-to-sample spacing. Spatial registration allowed us to compare high-resolution images of magnetic fields associated with action currents and optical recordings of transmembrane potentials to study the bidomain nature of cardiac tissue or to match petrography to magnetic field maps in thin sections of geological samples.

  18. Slip-flow in complex porous media as determined by a multi-relaxation-time lattice Boltzmann model

    NASA Astrophysics Data System (ADS)

    Landry, C. J.; Prodanovic, M.; Eichhubl, P.

    2014-12-01

    The pores and throats of shales and mudrocks are predominantly found within a range of 1-100 nm, within this size range the flow of gas at reservoir conditions will fall within the slip-flow and low transition-flow regime (0.001 < Kn < 0.5). Currently, the study of slip-flows is for the most part limited to simple tube and channel geometries, however, the geometry of mudrock pores is often sponge-like (organic matter) and/or platy (clays). Molecular dynamics (MD) simulations can be used to predict slip-flow in complex geometries, but due to prohibitive computational demand are generally limited to small volumes (one to several pores). Here we present a multi-relaxation-time lattice Boltzmann model (LBM) parameterized for slip-flow (Guo et al. 2008) and adapted to complex geometries. LBMs are inherently parallelizable, such that flow in complex geometries of significant (near REV-scale) volumes can be readily simulated at a fraction of the computational cost of MD simulations. At the macroscopic scale the LBM is parameterized with local effective viscosities at each node to capture the variance of the mean-free-path of gas molecules in a bounded system. The corrected mean-free-path for each lattice node is determined using the mean distance of the node to the pore-wall and Stop's correction for mean-free-paths in an infinite parallel-plate geometry. At the microscopic scale, a combined bounce-back specular-reflection boundary condition is applied to the pore-wall nodes to capture Maxwellian slip. The LBM simulation results are first validated in simple tube and channel geometries, where good agreement is found for Knudsen numbers below 0.1, and fair agreement is found for Knudsen numbers between 0.1 and 0.5. More complex geometries are then examined including triangular ducts and ellipsoid ducts, both with constant and tapering/expanding cross-sections, as well as a clay pore-network imaged from a hydrocarbon producing shale by sequential focused ion-beam scanning electron microscopy. These results are analyzed to determine grid-independent resolutions, and used to explore the relationship between effective permeability and Knudsen number in complex geometries.
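
    A minimal sketch of the macroscopic-scale idea above: assign each pore node a local Knudsen number from its distance to the nearest wall and convert it to an effective viscosity through a rarefaction correction. The specific correction function used here (a simple Bosanquet-type form) and the grid setup are illustrative assumptions, not the parameterization of Guo et al. (2008).

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def local_effective_viscosity(pore_mask, mean_free_path, mu0, dx=1.0):
          """Per-node effective viscosity in a wall-bounded pore space.

          pore_mask      : boolean array, True where the node is pore (not solid)
          mean_free_path : bulk gas mean free path (same length unit as dx)
          mu0            : unbounded (bulk) dynamic viscosity
          """
          # Distance from each pore node to the nearest solid wall.
          wall_dist = distance_transform_edt(pore_mask) * dx
          # Local Knudsen number based on the wall-bounded length scale.
          kn_local = mean_free_path / np.maximum(2.0 * wall_dist, 1e-12)
          # Illustrative Bosanquet-type rarefaction correction (assumption).
          mu_eff = mu0 / (1.0 + 2.0 * kn_local)
          return np.where(pore_mask, mu_eff, np.nan)

      # Toy usage: a 2D slit channel 20 nodes wide, 1 nm grid spacing, 50 nm mean free path.
      mask = np.zeros((22, 40), dtype=bool)
      mask[1:-1, :] = True
      print(local_effective_viscosity(mask, mean_free_path=50.0, mu0=1.8e-5, dx=1.0)[1:-1, 0])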

  19. WE-AB-BRA-08: Correction of Patient Motion in C-Arm Cone-Beam CT Using 3D-2D Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouadah, S; Jacobson, M; Stayman, JW

    2016-06-15

    Purpose: Intraoperative C-arm cone-beam CT (CBCT) is subject to artifacts arising from patient motion during the fairly long (∼5–20 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in geometric calibration. Methods: A 3D-2D registration process was used to register each projection to DRRs computed from the 3D image by maximizing gradient orientation (GO) using the CMA-ES optimizer. The resulting rigid 6 DOF transforms were applied to the system projection matrices, and a 3D image was reconstructed via model-based image reconstruction (MBIR, which accommodates the resulting noncircular orbit). Experiments were conducted using a Zeego robotic C-arm (20 s, 200°, 496 projections) to image a head phantom undergoing various types of motion: 1) 5° lateral motion; 2) 15° lateral motion; and 3) 5° lateral motion with 10 mm periodic inferior-superior motion. Images were reconstructed using a penalized likelihood (PL) objective function, and structural similarity (SSIM) was measured for axial slices of the reconstructed images. A motion-free image was acquired using the same protocol for comparison. Results: There was significant improvement (p < 0.001) in the SSIM of the motion-corrected (MC) images compared to uncorrected images. The SSIM in MC-PL images was >0.99, indicating near identity to the motion-free reference. The point spread function (PSF) measured from a wire in the phantom was restored to that of the reference in each case. Conclusion: The 3D-2D registration method provides a robust framework for mitigation of motion artifacts and is expected to hold for applications in the head, pelvis, and extremities with reasonably constrained operative setup. Further improvement can be achieved by incorporating multiple rigid components and non-rigid deformation within the framework. The method is highly parallelizable and could in principle be run with every acquisition. Research supported by National Institutes of Health Grant No. R01-EB-017226 and an academic-industry partnership with Siemens Healthcare (AX Division, Forchheim, Germany).

  20. Critical Resolution and Physical Dependencies of Supernovae: Stars in Heat and Under Pressure

    NASA Astrophysics Data System (ADS)

    Vartanyan, David; Burrows, Adam Seth

    2017-01-01

    For over five decades, the mechanism of explosion in core-collapse supernovae has remained one of the last untoppled bastions in astrophysics, presenting both a technical and a physical problem. Motivated by advances in computation and nuclear physics and by the resilience of the core-collapse problem, collaborators Adam Burrows (Princeton), Joshua Dolence (LANL), and Aaron Skinner (LLNL) have developed FORNAX, a highly parallelizable multidimensional supernova simulation code featuring an explicit hydrodynamic and radiation-transfer solver. We present the results (Vartanyan et al. 2016, Burrows et al. 2016, both in preparation) of a sequence of two-dimensional axisymmetric simulations of core-collapse supernovae using FORNAX, probing both progenitor mass dependence and the effect of physical inputs on explosiveness in our study of the revival of the stalled shock via the neutrino heating mechanism. We also performed a resolution study, testing spatial and energy-group resolutions as well as compilation flags. We illustrate that, when the protoneutron star bounded by a stalled shock is close to the critical explosion condition (Burrows & Goshy 1993), small changes of order 10% in neutrino energies and luminosities can result in explosion, and that these effects couple nonlinearly. We show that many-body medium effects in neutrino-nucleon scattering as well as inelastic neutrino-nucleon and neutrino-electron scattering are strongly favorable to earlier and more vigorous explosions by depositing energy in the gain region. Additionally, we probe the effects of a ray-by-ray+ transport solver (which does not include transverse velocity terms) employed by many groups and confirm that it artificially accelerates explosion (see also Skinner et al. 2016). In the coming year, we are gearing up for the first set of 3D simulations yet performed in the context of core-collapse supernovae employing 20 energy groups and one of the most complete nuclear physics modules in the field, with the ambitious goal of simulating supernova remnants like Cas A. The current environment for core-collapse supernova research provides invigorating optimism that a robust explosion mechanism is within reach on graduate-student lifetimes.

  1. SU-E-T-06: 4D Particle Swarm Optimization to Enable Lung SBRT in Patients with Central And/or Large Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modiri, A; Gu, X; Hagan, A

    2015-06-15

    Purpose: Patients presenting with large and/or centrally-located lung tumors are currently considered ineligible for highly potent regimens such as SBRT due to concerns of toxicity to normal tissues and organs-at-risk (OARs). We present a particle swarm optimization (PSO)-based 4D planning technique, designed for MLC tracking delivery, that exploits the temporal dimension as an additional degree of freedom to significantly improve OAR-sparing and reduce toxicity to levels clinically considered as acceptable for SBRT administration. Methods: Two early-stage SBRT-ineligible NSCLC patients were considered, presenting with tumors of maximum dimensions of 7.4cm (central-right lobe; 1.5cm motion) and 11.9cm (upper-right lobe; 1cm motion). In each case, the target and normal structures were manually contoured on each of the ten 4DCT phases. The corresponding ten initial 3D-conformal plans (Pt#1: 7-beams; Pt#2: 9-beams) were generated using the Eclipse planning system. Using 4D-PSO, fluence weights were optimized over all beams and all phases (70 and 90 apertures for Pt1 and Pt2, respectively). Doses to normal tissues and OARs were compared with clinically established lung SBRT guidelines based on RTOG-0236. Results: The PSO-based 4D SBRT plan yielded tumor coverage and dose sparing for parallel and serial OARs within the SBRT guidelines for both patients. The dose sparing compared to the clinically-delivered conventionally fractionated plan for Patient 1 (Patient 2) was: heart Dmean = 11% (33%); lung V20 = 16% (21%); lung Dmean = 7% (20%); spinal cord Dmax = 5% (16%); spinal cord Dmean = 7% (33%); esophagus Dmax = 0% (18%). Conclusion: Truly 4D planning can significantly reduce dose to normal tissues and OARs. Such sparing opens up the possibility of using highly potent and effective regimens such as lung SBRT for patients who were conventionally considered SBRT-ineligible. Given the large, non-convex solution space, PSO represents an attractive, parallelizable tool to successfully achieve a globally optimal solution for this problem. This work was supported through funding from the National Institutes of Health and Varian Medical Systems.
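
    A minimal particle swarm optimization (PSO) sketch on a toy objective, to illustrate the optimizer family referred to above. The swarm parameters and the quadratic objective are placeholders; the actual planning problem optimizes aperture fluence weights against dose-volume objectives.

      import numpy as np

      def pso_minimize(objective, dim, n_particles=20, n_iters=100,
                       w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0), seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
          v = np.zeros_like(x)                               # particle velocities
          pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
          g = pbest[pbest_val.argmin()].copy()               # global best position
          for _ in range(n_iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([objective(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              g = pbest[pbest_val.argmin()].copy()
          return g, pbest_val.min()

      # Toy usage: recover a target fluence-weight vector.
      target = np.linspace(0.1, 0.9, 8)
      best_x, best_val = pso_minimize(lambda wgt: np.sum((wgt - target) ** 2), dim=8)
      print(best_val)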

  2. A design methodology for portable software on parallel computers

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.

    1993-01-01

    This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. Trying to keep one's software on the current high performance hardware, a software developer almost continually faces yet another expensive software transportation. The problem of the proposed research is to create a design methodology that helps designers to more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The list of research tasks are specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.

  3. SU-F-T-256: 4D IMRT Planning Using An Early Prototype GPU-Enabled Eclipse Workstation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagan, A; Modiri, A; Sawant, A

    Purpose: True 4D IMRT planning, based on simultaneous spatiotemporal optimization, has been shown to significantly improve plan quality in lung radiotherapy. However, the high computational complexity associated with such planning represents a significant barrier to widespread clinical deployment. We introduce an early prototype GPU-enabled Eclipse workstation for inverse planning. To our knowledge, this is the first GPU-integrated Eclipse system demonstrating the potential for clinical translation of GPU computing on a major commercially-available TPS. Methods: The prototype system comprised four NVIDIA Tesla K80 GPUs, with a maximum processing capability of 8.5 Tflops per K80 card. The system architecture consisted of three key modules: (i) a GPU-based inverse planning module using a highly-parallelizable, swarm intelligence-based global optimization algorithm, (ii) a GPU-based open-source b-spline deformable image registration module, Elastix, and (iii) a CUDA-based data management module. For evaluation, aperture fluence weights in an IMRT plan were optimized over 9 beams, 166 apertures and 10 respiratory phases (14,940 variables) for a lung cancer case (GTV = 95 cc, right lower lobe, 15 mm cranio-caudal motion). Sensitivity of the planning time and memory expense to parameter variations was quantified. Results: GPU-based inverse planning was significantly accelerated compared to its CPU counterpart (36 vs 488 min, for 10 phases, 10 search agents and 10 iterations). The optimized IMRT plan significantly improved OAR sparing compared to the original internal target volume (ITV)-based clinical plan, while maintaining prescribed tumor coverage. The dose-sparing improvements were: esophagus Dmax 50%, heart Dmax 42% and spinal cord Dmax 25%. Conclusion: Our early prototype system demonstrates that, through massive parallelization, computationally intense tasks such as 4D treatment planning can be accomplished in clinically feasible timeframes. With further optimization, such systems are expected to enable the eventual clinical translation of higher-dimensional and complex treatment planning strategies to significantly improve plan quality. This work was partially supported through research funding from the National Institutes of Health (R01CA169102) and Varian Medical Systems, Palo Alto, CA, USA.

  4. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show the decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
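
    A minimal sketch of the traversal-time estimate described above: fit a Gaussian kernel density estimate to historical flight times for one origin-destination route and take the mode of the fitted density. The sample data are invented; the real model would use recorded flight times per route.

      import numpy as np
      from scipy.stats import gaussian_kde

      def route_traversal_mode(times_min):
          """Estimate the modal traversal time (minutes) of one flight route."""
          kde = gaussian_kde(times_min)
          grid = np.linspace(min(times_min), max(times_min), 512)
          return grid[np.argmax(kde(grid))]

      # Invented historical flight times for a single origin-destination pair.
      rng = np.random.default_rng(2)
      times = np.concatenate([rng.normal(185, 6, 300),    # typical flights
                              rng.normal(215, 10, 40)])   # a minority of delayed flights
      print(round(route_traversal_mode(times), 1))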

  5. Multiwavelength Diagnostics of the Precursor and Main Phases of an M1.8 Flare on 2011 April 22

    NASA Technical Reports Server (NTRS)

    Awasthi, A. K.; Jain, R.; Gadhiya, P. D.; Aschwanden, M. J.; Uddin, W.; Srivastava, A. K.; Chandra, R.; Gopalswamy, N.; Nitta, N. V.; Yashiro, S.; hide

    2013-01-01

    We study the temporal, spatial and spectral evolution of the M1.8 flare, which occurred in the active region 11195 (S17E31) on 2011 April 22, and explore the underlying physical processes during the precursor phase and their relation to the main phase. The study of the source morphology using the composite images in 131 Å wavelength observed by the Solar Dynamics Observatory/Atmospheric Imaging Assembly and 6-14 kiloelectronvolts [from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI)] revealed a multi-loop system that destabilized systematically during the precursor and main phases. In contrast, hard X-ray emission (20-50 kiloelectronvolts) was absent during the precursor phase, appearing only from the onset of the impulsive phase in the form of foot-points of emitting loops. This study also revealed the heated loop-top prior to the loop emission, although no accompanying foot-point sources were observed during the precursor phase. We estimate the flare plasma parameters, namely temperature (T), emission measure (EM), power-law index (gamma) and photon turn-over energy (to), and found them to be varying in the ranges 12.4-23.4 megakelvins, 0.0003-0.6 × 10^49 per cubic centimeter, 5-9 and 14-18 kiloelectronvolts, respectively, by forward fitting RHESSI spectral observations. The energy released in the precursor phase was thermal and constituted approximately 1 percent of the total energy released during the flare. The study of the morphological evolution of the filament in conjunction with synthesized T and EM maps was carried out, which reveals (a) partial filament eruption prior to the onset of the precursor emission and (b) heated dense plasma over the polarity inversion line and in the vicinity of the slowly rising filament during the precursor phase. Based on the implications from multiwavelength observations, we propose a scheme to unify the energy release during the precursor and main phase emissions, in which the precursor phase emission originated via a conduction front that resulted from the partial filament eruption. Next, the heated leftover S-shaped filament underwent slow rise and heating due to magnetic reconnection and finally erupted to produce emission during the impulsive and gradual phases.

  6. Association between root resorption incident to orthodontic treatment and treatment factors.

    PubMed

    Motokawa, Masahide; Sasamoto, Tomoko; Kaku, Masato; Kawata, Toshitsugu; Matsuda, Yayoi; Terao, Akiko; Tanne, Kazuo

    2012-06-01

    The purpose of this study was to clarify the prevalence and degree of root resorption induced by orthodontic treatment in association with treatment factors. The files of 243 patients (72 males and 171 females) aged 9-51 years were randomly selected from subjects treated with multi-bracket appliances. The severity of root resorption was classified into five categories on radiographs taken before and after treatment. The subjects were divided into extraction (n = 113 patients, 2805 teeth) and non-extraction (n = 130 patients, 3616 teeth) groups and surgical (n = 56 patients, 1503 teeth) and non-surgical treatment (n = 187 patients, 4918 teeth) groups. These subjects were also divided into two or three groups based on the duration of multiloop edgewise archwire (MEAW) treatment, elastic use, and total treatment time: 0 months (T1; n = 184 patients, 4831 teeth), 1-6 months (T2; n = 37 patients, 994 teeth), and more than 6 months (T3; n = 22 patients, 596 teeth); 0-6 months (n = 114 patients, 3016 teeth) and more than 6 months (n = 129 patients, 3405 teeth); 1-30 months (n = 148 patients, 3913 teeth) and more than 30 months (n = 95 patients, 2508 teeth). The prevalence of overall and severe root resorption evaluated by the number of subjects and teeth was compared with a chi-square test. A Student's t-test for unpaired data was used to determine any statistically significant differences. The prevalence of severe root resorption based on the number of teeth was significantly higher in the group with extractions (P < 0.01). Longer use of a MEAW appliance and elastics also produced a significantly higher prevalence of root resorption (P < 0.05). On the other hand, the prevalence of severe root resorption was not significantly different between the subjects treated with or without surgery, but there was a significant increase when treatment time was prolonged (P < 0.05). A significant difference was found in the amount of root movement of the upper central incisors and the distance from their root apices to the cortical bone surface (P < 0.05). These are regarded as essential factors in the onset of root resorption. These results indicate that orthodontic treatment with extractions, long-term use of a MEAW appliance and elastics, treatment time, and distance of tooth movement are risk factors for severe root resorption.

  7. Magnetic hyperthermia properties of nanoparticles inside lysosomes using kinetic Monte Carlo simulations: Influence of key parameters and dipolar interactions, and evidence for strong spatial variation of heating power

    NASA Astrophysics Data System (ADS)

    Tan, R. P.; Carrey, J.; Respaud, M.

    2014-12-01

    Understanding the influence of dipolar interactions in magnetic hyperthermia experiments is of crucial importance for fine optimization of nanoparticle (NP) heating power. In this study we use a kinetic Monte Carlo algorithm to calculate hysteresis loops that correctly account for both time and temperature. This algorithm is shown to correctly reproduce the high-frequency hysteresis loop of both superparamagnetic and ferromagnetic NPs without any ad hoc or artificial parameters. The algorithm is easily parallelizable with a good speed-up behavior, which considerably decreases the calculation time on several processors and enables the study of assemblies of several thousands of NPs. The specific absorption rate (SAR) of magnetic NPs dispersed inside spherical lysosomes is studied as a function of several key parameters: volume concentration, applied magnetic field, lysosome size, NP diameter, and anisotropy. The influence of these parameters is illustrated and comprehensively explained. In summary, magnetic interactions increase the coercive field, saturation field, and hysteresis area of major loops. However, for small amplitude magnetic fields such as those used in magnetic hyperthermia, the heating power as a function of concentration can increase, decrease, or display a bell shape, depending on the relationship between the applied magnetic field and the coercive/saturation fields of the NPs. The hysteresis area is found to be well correlated with the parallel or antiparallel nature of the dipolar field acting on each particle. The heating power of a given NP is strongly influenced by a local concentration involving approximately 20 neighbors. Because this local concentration strongly decreases upon approaching the surface, the heating power increases or decreases in the vicinity of the lysosome membrane. The amplitude of variation reaches more than one order of magnitude in certain conditions. This transition occurs on a thickness corresponding to approximately 1.3 times the mean distance between two neighbors. The amplitude and sign of this variation is explained. Finally, implications of these various findings are discussed in the framework of magnetic hyperthermia optimization. It is concluded that feedback on two specific points from biology experiments is required for further advancement of the optimization of magnetic NPs for magnetic hyperthermia. The present simulations will be an advantageous tool to optimize magnetic NPs heating power and interpret experimental results.
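
    A minimal sketch of a kinetic Monte Carlo hysteresis loop for independent uniaxial macrospins, to illustrate the class of simulation referred to above. The anisotropy, volume, attempt frequency, and field-step duration are invented numbers, and dipolar interactions and easy-axis dispersion, which are central to the paper, are deliberately omitted; this is not the authors' algorithm.

      import numpy as np

      def hysteresis_loop(K=1e5, V=1e-24, T=300.0, f0=1e10, dt=1e-5,
                          h_max=1.5, n_steps=200, n_particles=2000, seed=0):
          """Kinetic Monte Carlo major loop for independent uniaxial macrospins.

          The reduced field h = H/H_K is swept along the easy axis; switching
          rates follow a Neel-Arrhenius law with Stoner-Wohlfarth barriers
          K*V*(1 + m*h)^2, with h clipped to |h| <= 1.
          """
          kB = 1.380649e-23
          rng = np.random.default_rng(seed)
          sweep = np.concatenate([np.linspace(h_max, -h_max, n_steps),
                                  np.linspace(-h_max, h_max, n_steps)])
          m = np.ones(n_particles)                       # +1 / -1 moments, start saturated
          loop = []
          for h in sweep:
              barrier = K * V * (1.0 + m * np.clip(h, -1.0, 1.0)) ** 2
              rate = f0 * np.exp(-barrier / (kB * T))    # Neel-Arrhenius switching rate
              p_flip = 1.0 - np.exp(-rate * dt)          # flip probability in one field step
              m = np.where(rng.random(n_particles) < p_flip, -m, m)
              loop.append((h, m.mean()))
          return np.array(loop)

      for h, mag in hysteresis_loop()[::50]:
          print(f"h = {h:+.2f}  <m> = {mag:+.2f}")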

  8. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.

  9. Massively-parallel FDTD simulations to address mask electromagnetic effects in hyper-NA immersion lithography

    NASA Astrophysics Data System (ADS)

    Tirapu Azpiroz, Jaione; Burr, Geoffrey W.; Rosenbluth, Alan E.; Hibbs, Michael

    2008-03-01

    In the Hyper-NA immersion lithography regime, the electromagnetic response of the reticle is known to deviate in a complicated manner from the idealized Thin-Mask-like behavior. Already, this is driving certain RET choices, such as the use of polarized illumination and the customization of reticle film stacks. Unfortunately, full 3-D electromagnetic mask simulations are computationally intensive. And while OPC-compatible mask electromagnetic field (EMF) models can offer a reasonable tradeoff between speed and accuracy for full-chip OPC applications, full understanding of these complex physical effects demands higher accuracy. Our paper describes recent advances in leveraging High Performance Computing as a critical step towards lithographic modeling of the full manufacturing process. In this paper, highly accurate full 3-D electromagnetic simulations of very large mask layouts are conducted in parallel with reasonable turnaround time, using a BlueGene/L supercomputer and a Finite-Difference Time-Domain (FDTD) code developed internally within IBM. A 3-D simulation of a large 2-D layout spanning 5μm×5μm at the wafer plane (and thus 20μm×20μm×0.5μm at the mask) results in a simulation with roughly 12.5GB of memory (grid size of 10nm at the mask, single-precision computation, about 30 bytes/grid point). FDTD is flexible and easily parallelizable to enable full simulations of such large layouts in approximately an hour using one BlueGene/L "midplane" containing 512 dual-processor nodes with 256MB of memory per processor. Our scaling studies on BlueGene/L demonstrate that simulations up to 100μm × 100μm at the mask can be computed in a few hours. Finally, we will show that the use of a subcell technique permits accurate simulation of features smaller than the grid discretization, thus improving on the tradeoff between computational complexity and simulation accuracy. We demonstrate the correlation of the real and quadrature components that comprise the Boundary Layer representation of the EMF behavior of a mask blank to intensity measurements of the mask diffraction patterns by an Aerial Image Measurement System (AIMS) with polarized illumination. We also discuss how this model can become a powerful tool for the assessment of the impact to the lithographic process of a mask blank.
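
    A minimal one-dimensional FDTD (Yee scheme) sketch to illustrate the update structure that makes the method so easy to parallelize: each field point is advanced using only its immediate neighbours. It is not the IBM 3-D mask code; the grid size, time steps, and source are illustrative assumptions in normalized units.

      import numpy as np

      def fdtd_1d(n_cells=400, n_steps=1000, source_cell=50):
          """1D free-space FDTD with a soft Gaussian-pulse source (normalized units)."""
          ez = np.zeros(n_cells)          # electric field
          hy = np.zeros(n_cells - 1)      # magnetic field, staggered half a cell
          courant = 0.5                   # Courant number (<= 1 for stability in 1D)
          for n in range(n_steps):
              # Each update touches only nearest neighbours, so the grid can be
              # decomposed into blocks that exchange a single layer of halo cells.
              hy += courant * (ez[1:] - ez[:-1])
              ez[1:-1] += courant * (hy[1:] - hy[:-1])
              ez[source_cell] += np.exp(-0.5 * ((n - 80) / 20.0) ** 2)  # soft source
          return ez

      print(np.round(fdtd_1d()[:10], 4))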

  10. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.

  11. A fast, parallel algorithm to solve the basic fluvial erosion/transport equations

    NASA Astrophysics Data System (ADS)

    Braun, J.

    2012-04-01

    Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport and deposition, which leads to predictions of the geometry of a river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hill slopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation which limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation. It is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape), and is fully parallelizable (the computation cost decreases in direct inverse proportion to the number of processors used to solve the problem). The algorithm is ideally suited for use on the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 × 10,000 nodes) while keeping the computational cost reasonable (order 1 sec per time step). Furthermore, I will show that the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. I will demonstrate that such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of the landscapes on geological time scales. It can also be used to model surface processes at the continental or planetary scale and be linked to lithospheric or mantle flow models to predict the potential interactions between tectonics driving surface uplift in orogenic areas, mantle flow producing dynamic topography on continental scales and surface processes.
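
    A minimal sketch of an O(n) drainage-area accumulation in the spirit of receiver/stack-based schemes: each node drains to a single receiver, nodes are ordered so every donor is visited before its receiver, and areas are accumulated in one linear pass. The node geometry and receiver assignment here are invented; this is an illustration of the idea, not the author's code.

      import numpy as np

      def drainage_area(receiver, cell_area):
          """Accumulate upstream drainage area in O(n).

          receiver[i] : index of the node that node i drains to
                        (receiver[i] == i marks a base-level/outlet node)
          cell_area[i]: surface area owned by node i
          """
          n = len(receiver)
          # Build donor lists (which nodes drain into each node).
          donors = [[] for _ in range(n)]
          for i in range(n):
              if receiver[i] != i:
                  donors[receiver[i]].append(i)
          # Order nodes from outlets upstream (iterative DFS), then sweep back down.
          order = []
          stack = [i for i in range(n) if receiver[i] == i]
          while stack:
              node = stack.pop()
              order.append(node)
              stack.extend(donors[node])
          area = np.asarray(cell_area, dtype=float).copy()
          for node in reversed(order):          # donors are visited before receivers
              if receiver[node] != node:
                  area[receiver[node]] += area[node]
          return area

      # Tiny 6-node example: a chain 5 -> 4 -> 3 -> 0 plus a tributary 2 -> 1 -> 0.
      receiver = np.array([0, 0, 1, 0, 3, 4])
      print(drainage_area(receiver, np.ones(6)))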

  12. Interactive numerals

    PubMed Central

    2017-01-01

    Although Arabic numerals (like ‘2016’ and ‘3.14’) are ubiquitous, we show that in interactive computer applications they are often misleading and surprisingly unreliable. We introduce interactive numerals as a new concept and show, like Roman numerals and Arabic numerals, interactive numerals introduce another way of using and thinking about numbers. Properly understanding interactive numerals is essential for all computer applications that involve numerical data entered by users, including finance, medicine, aviation and science. PMID:28484609

  13. Symbolic, Nonsymbolic and Conceptual: An Across-Notation Study on the Space Mapping of Numerals.

    PubMed

    Zhang, Yu; You, Xuqun; Zhu, Rongjuan

    2016-07-01

    Previous studies suggested that there are interconnections between the two numeral modalities of symbolic notation and nonsymbolic notation (arrays of dots); both differences and similarities in the processing and representation of the two modalities have been found in previous research. However, whether there are differences between the spatial representation and numeral-space mapping of the two modalities has remained uninvestigated. The present study examines whether such differences exist; in particular, how zero, as both a symbolic magnitude numeral and a nonsymbolic conceptual numeral, maps onto space, and whether the mapping happens automatically at an early stage of numeral information processing. Results of the two experiments demonstrate that the low-level processing of symbolic numerals, including zero, and of nonsymbolic numerals, except zero, can map onto space, whereas the low-level processing of nonsymbolic zero as a semantic conceptual numeral cannot, indicating the special status of zero in the numeral domain. The present study indicates that the processing of non-semantic numerals can map onto space, whereas semantic conceptual numerals cannot. © The Author(s) 2016.

  14. Numerical MHD study for plasmoid instability in uniform resistivity

    NASA Astrophysics Data System (ADS)

    Shimizu, Tohru; Kondoh, Koji; Zenitani, Seiji

    2017-11-01

    The plasmoid instability (PI) in uniform resistivity is numerically studied with an MHD numerical code based on the HLLD scheme. It is shown that the PI observed in numerical studies may often include a numerical (non-physical) tearing instability caused by the numerical dissipations. By increasing the numerical resolution, the numerical tearing instability gradually disappears and the physical tearing instability remains; hence, convergence of the numerical results is observed. Note that the reconnection rate observed in the numerical tearing instability can be higher than that of the physical tearing instability. On the other hand, regardless of whether it is numerical or physical, the tearing instability can be classified into symmetric and asymmetric tearing instability. The symmetric tearing instability tends to occur when the thinning of the current sheet is stopped by the physical or numerical dissipations, often resulting in drastic changes in the plasmoid chain's structure and its activity. In this paper, by eliminating the numerical tearing instability, we could not specify the critical Lundquist number Sc beyond which PI is fully developed. This suggests that Sc does not exist, at least around S = 10^5.

  15. Numeral Incorporation in Japanese Sign Language

    ERIC Educational Resources Information Center

    Ktejik, Mish

    2013-01-01

    This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…

  16. A delta-rule model of numerical and non-numerical order processing.

    PubMed

    Verguts, Tom; Van Opstal, Filip

    2014-06-01

    Numerical and non-numerical order processing share empirical characteristics (the distance effect and semantic congruity), but there are also important differences (in the size effect and the end effect). At the same time, models and theories of numerical and non-numerical order processing developed largely separately. Here, we combine insights from two earlier models to integrate them in a common framework. We argue that the same learning principle underlies numerical and non-numerical orders, but that environmental features determine the empirical differences. Implications for current theories on order processing are pointed out. PsycINFO Database Record (c) 2014 APA, all rights reserved.
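
    A minimal sketch of a delta-rule learner for ordinal material, assuming a simple setup in which each item is a one-hot input and the target is its position in the ordered sequence; the learned weights then reproduce distance effects in pairwise comparisons. This is an illustration of the learning principle only, not the authors' model.

      import numpy as np

      def train_delta_rule(n_items=8, n_epochs=200, lr=0.05, seed=0):
          """Learn a scalar 'position' code for each item with the delta rule."""
          rng = np.random.default_rng(seed)
          w = np.zeros(n_items)                      # one weight per (one-hot) item
          targets = np.arange(n_items, dtype=float)  # ordinal position of each item
          for _ in range(n_epochs):
              for i in rng.permutation(n_items):
                  error = targets[i] - w[i]          # delta rule: w <- w + lr * error * x
                  w[i] += lr * error
          return w

      w = train_delta_rule()
      # Pairwise comparison is easier (larger signed difference) for distant items:
      print(round(w[7] - w[6], 2), "vs", round(w[7] - w[1], 2))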

  17. Physical similarity or numerical representation counts in same-different, numerical comparison, physical comparison, and priming tasks?

    PubMed

    Zhang, Li; Xin, Ziqiang; Feng, Tingyong; Chen, Yinghe; Szűcs, Denes

    2018-03-01

    Recent studies have highlighted the fact that some tasks used to study symbolic number representations are confounded by judgments about physical similarity. Here, we investigated whether the contribution of physical similarity and numerical representation differed in the often-used symbolic same-different, numerical comparison, physical comparison, and priming tasks. Experiment 1 showed that subjective physical similarity was the best predictor of participants' performance in the same-different task, regardless of simultaneous or sequential presentation. Furthermore, the contribution of subjective physical similarity was larger in a simultaneous presentation than in a sequential presentation. Experiment 2 showed that only numerical representation was involved in numerical comparison. Experiment 3 showed that both subjective physical similarity and numerical representation contributed to participants' physical comparison performance. Finally, only numerical representation contributed to participants' performance in a priming task as revealed by Experiment 4. Taken together, the contribution of physical similarity and numerical representation depends on task demands. Performance primarily seems to rely on numerical properties in tasks that require explicit quantitative comparison judgments (physical or numerical), while physical stimulus properties exert an effect in the same-different task.

  18. Are numbers grounded in a general magnitude processing system? A functional neuroimaging meta-analysis.

    PubMed

    Sokolowski, H Moriah; Fias, Wim; Bosah Ononye, Chuka; Ansari, Daniel

    2017-10-01

    It is currently debated whether numbers are processed using a number-specific system or a general magnitude processing system, also used for non-numerical magnitudes such as physical size, duration, or luminance. Activation likelihood estimation (ALE) was used to conduct the first quantitative meta-analysis of 93 empirical neuroimaging papers examining neural activation during numerical and non-numerical magnitude processing. Foci were compiled to generate probabilistic maps of activation for non-numerical magnitudes (e.g. physical size), symbolic numerical magnitudes (e.g. Arabic digits), and nonsymbolic numerical magnitudes (e.g. dot arrays). Conjunction analyses revealed overlapping activation for symbolic, nonsymbolic and non-numerical magnitudes in frontal and parietal lobes. Contrast analyses revealed specific activation in the left superior parietal lobule for symbolic numerical magnitudes. In contrast, small regions in the bilateral precuneus were specifically activated for nonsymbolic numerical magnitudes. No regions in the parietal lobes were activated for non-numerical magnitudes that were not also activated for numerical magnitudes. Therefore, numbers are processed using both a generalized magnitude system and format specific number regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Ordinal judgments of numerical symbols by macaques (Macaca mulatta)

    NASA Technical Reports Server (NTRS)

    Washburn, David A.; Rumbaugh, Duane M.

    1991-01-01

    Two rhesus monkeys (Macaca mulatta) learned that the arabic numerals 0 through 9 represented corresponding quantities of food pellets. By manipulating a joystick, the monkeys were able to make a selection of paired numerals presented on a computer screen. Although the monkeys received a corresponding number of pellets even if the lesser of the two numerals was selected, they learned generally to choose the numeral of greatest value even when pellet delivery was made arrhythmic. In subsequent tests, they chose the numerals of greater value when presented in novel combinations or in random arrays of up to five numerals. Thus, the monkeys made ordinal judgments of numerical symbols in accordance with their absolute or relative values.

  20. Finger-Based Numerical Skills Link Fine Motor Skills to Numerical Development in Preschoolers.

    PubMed

    Suggate, Sebastian; Stoeger, Heidrun; Fischer, Ursula

    2017-12-01

    Previous studies investigating the association between fine-motor skills (FMS) and mathematical skills have lacked specificity. In this study, we test whether an FMS link to numerical skills is due to the involvement of finger representations in early mathematics. We gave 81 pre-schoolers (mean age of 4 years, 9 months) a set of FMS measures and numerical tasks with and without a specific finger focus. Additionally, we used receptive vocabulary and chronological age as control measures. FMS linked more closely to finger-based than to nonfinger-based numerical skills even after accounting for the control variables. Moreover, the relationship between FMS and numerical skill was entirely mediated by finger-based numerical skills. We concluded that FMS are closely related to early numerical skill development through finger-based numerical counting that aids the acquisition of mathematical mental representations.

  1. Numbers matter to informed patient choices: A randomized design across age and numeracy levels

    PubMed Central

    Peters, Ellen; Hart, P. Sol; Tusler, Martin; Fraenkel, Liana

    2013-01-01

    Background How drug adverse events (AEs) are communicated in the United States may mislead consumers and result in low adherence. Requiring written information to include numeric AE-likelihood information might lessen these effects, but providing numbers may disadvantage less skilled populations. Objective To determine risk comprehension and willingness to use a medication when presented with numeric or non-numeric AE-likelihood information across age, numeracy, and cholesterol-lowering-drug-usage groups. Design In a cross-sectional internet survey (N=905; American Life Panel, 5/15/08–6/18/08), respondents were presented with a hypothetical prescription medication for high cholesterol. AE likelihoods were described using one of six formats (non-numeric: Consumer-Medication-Information (CMI)-like list, risk labels; numeric: percentage, frequency, risk-labels-plus-percentage, risk-labels-plus-frequency). Main outcome measures were risk comprehension (recoded to indicate presence/absence of risk overestimation and underestimation), willingness to use the medication (7-point scale; not likely=0, very likely=6), and main reason for willingness (chosen from eight predefined reasons). Results Individuals given non-numeric information were more likely to overestimate risk, less willing to take the medication, and gave different reasons than those provided numeric information across numeracy and age groups (e.g., among less numerate: 69% and 18% overestimated risks in non-numeric and numeric formats, respectively; among more numerate: these same proportions were 66% and 6%). Less numerate middle-aged and older adults, however, showed less influence of numeric format on willingness to take the medication. Limitations It is unclear whether differences are clinically meaningful although some differences are large. Conclusions Providing numeric AE-likelihood information (compared to non-numeric) is likely to increase risk comprehension across numeracy and age levels. Its effects on uptake and adherence of prescribed drugs should be similar across the population, except perhaps in older, less numerate individuals. PMID:24246563

  2. 77 FR 71287 - CNMI-Only Transitional Worker Numerical Limitation for Fiscal Year 2013

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-30

    ...-ZB15 CNMI-Only Transitional Worker Numerical Limitation for Fiscal Year 2013 AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Notification of numerical limitation. SUMMARY: The Secretary of Homeland Security announces that the numerical limitation for the annual fiscal year numerical limitation...

  3. An Introduction to Numerical Control. Problems for Numerical Control Part Programming.

    ERIC Educational Resources Information Center

    Campbell, Clifton P.

    This combination text and workbook is intended to introduce industrial arts students to numerical control part programming. Discussed in the first section are the impact of numerical control, training efforts, numerical control in established programs, related information for drafting, and the Cartesian Coordinate System and dimensioning…

  4. 38 CFR 4.86 - Exceptional patterns of hearing impairment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the Roman numeral designation for hearing impairment from either Table VI or Table VIa, whichever... determine the Roman numeral designation for hearing impairment from either Table VI or Table VIa, whichever results in the higher numeral. That numeral will then be elevated to the next higher Roman numeral. Each...

  5. Numerical study on the Welander oscillatory natural circulation problem using high-order numerical methods

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Kim, Seung Jun

    2016-11-16

    In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes, and a theoretical stability map was derived from the original stability analysis. Numerical results obtained in this paper show very good agreement with Welander’s theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, with the high-order methods giving much smaller numerical errors than the low-order methods. For the stability analysis, the high-order numerical methods predict the stability map correctly, while the low-order numerical methods fail to do so: all theoretically unstable cases are predicted to be stable by the low-order methods. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.
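
    The qualitative difference between low-order and high-order time integration on an oscillatory problem can be illustrated with a toy sketch (a plain harmonic oscillator, not Welander's loop model; the schemes and parameters below are illustrative):

```python
# Sketch: integrate a neutrally stable oscillator y'' + y = 0 with a
# low-order method (forward Euler) and a high-order method (classical RK4).
# Forward Euler spuriously amplifies the oscillation, while RK4 keeps the
# amplitude nearly constant -- a toy analogue of low-order schemes
# misjudging the stability character of an oscillatory circulation problem.
import numpy as np

def rhs(u):
    y, v = u
    return np.array([v, -y])

def integrate(step, dt=0.05, t_end=50.0):
    u = np.array([1.0, 0.0])
    for _ in range(int(round(t_end / dt))):
        u = step(u, dt)
    return np.hypot(*u)               # amplitude sqrt(y^2 + v^2); exact value is 1

euler = lambda u, dt: u + dt * rhs(u)

def rk4(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("Euler amplitude after t=50:", integrate(euler))   # drifts well above 1
print("RK4   amplitude after t=50:", integrate(rk4))     # stays very close to 1
```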

  6. Intentional and automatic numerical processing as predictors of mathematical abilities in primary school children

    PubMed Central

    Pina, Violeta; Castillo, Alejandro; Cohen Kadosh, Roi; Fuentes, Luis J.

    2015-01-01

    Previous studies have suggested that numerical processing relates to mathematical performance, but it seems that such relationship is more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1–6. Participants were tested in an ample range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Concretely, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (the physical size) was incongruent; whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing in mathematical skills, but when inhibitory control is also involved. PMID:25873909

  7. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.

  8. Foundations of children's numerical and mathematical skills: the roles of symbolic and nonsymbolic representations of numerical magnitude.

    PubMed

    Lyons, Ian M; Ansari, Daniel

    2015-01-01

    Numerical and mathematical skills are critical predictors of academic success. The last three decades have seen a substantial growth in our understanding of how the human mind and brain represent and process numbers. In particular, research has shown that we share with animals the ability to represent numerical magnitude (the total number of items in a set) and that preverbal infants can process numerical magnitude. Further research has shown that similar processing signatures characterize numerical magnitude processing across species and developmental time. These findings suggest that an approximate system for nonsymbolic (e.g., dot arrays) numerical magnitude representation serves as the basis for the acquisition of cultural, symbolic (e.g., Arabic numerals) representations of numerical magnitude. This chapter explores this hypothesis by reviewing studies that have examined the relation between individual differences in nonsymbolic numerical magnitude processing and symbolic math abilities (e.g., arithmetic). Furthermore, we examine the extent to which the available literature provides strong evidence for a link between symbolic and nonsymbolic representations of numerical magnitude at the behavioral and neural levels of analysis. We conclude that claims that symbolic number abilities are grounded in the approximate system for the nonsymbolic representation of numerical magnitude are not strongly supported by the available evidence. Alternative models and future research directions are discussed. © 2015 Elsevier Inc. All rights reserved.

  9. Developmental specialization of the left parietal cortex for the semantic representation of Arabic numerals: an fMR-adaptation study.

    PubMed

    Vogel, Stephan E; Goffin, Celia; Ansari, Daniel

    2015-04-01

    The way the human brain constructs representations of numerical symbols is poorly understood. While increasing evidence from neuroimaging studies has indicated that the intraparietal sulcus (IPS) becomes increasingly specialized for symbolic numerical magnitude representation over developmental time, the extent to which these changes are associated with age-related differences in symbolic numerical magnitude representation or with developmental changes in non-numerical processes, such as response selection, remains to be uncovered. To address these outstanding questions we investigated developmental changes in the cortical representation of symbolic numerical magnitude in 6- to 14-year-old children using a passive functional magnetic resonance imaging adaptation design, thereby mitigating the influence of response selection. A single-digit Arabic numeral was repeatedly presented on a computer screen and interspersed with the presentation of novel digits deviating as a function of numerical ratio (smaller/larger number). Results demonstrated a correlation between age and numerical ratio in the left IPS, suggesting an age-related increase in the extent to which numerical symbols are represented in the left IPS. Brain activation of the right IPS was modulated by numerical ratio but did not correlate with age, indicating hemispheric differences in IPS engagement during the development of symbolic numerical representation. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  10. 16 CFR 304.5 - Marking requirements for imitation political items.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...-serif numerals. Each numeral shall have a vertical dimension of not less than two millimeters (2.0 mm... reproduction, whichever is the lesser. The minimum total horizontal dimension for the four numerals composing... year in sans-serif numerals. Each numeral shall have a vertical dimension of not less than two...

  11. 16 CFR 304.5 - Marking requirements for imitation political items.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...-serif numerals. Each numeral shall have a vertical dimension of not less than two millimeters (2.0 mm... reproduction, whichever is the lesser. The minimum total horizontal dimension for the four numerals composing... year in sans-serif numerals. Each numeral shall have a vertical dimension of not less than two...

  12. 16 CFR 304.5 - Marking requirements for imitation political items.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...-serif numerals. Each numeral shall have a vertical dimension of not less than two millimeters (2.0 mm... reproduction, whichever is the lesser. The minimum total horizontal dimension for the four numerals composing... year in sans-serif numerals. Each numeral shall have a vertical dimension of not less than two...

  13. Basic and Advanced Numerical Performances Relate to Mathematical Expertise but Are Fully Mediated by Visuospatial Skills

    ERIC Educational Resources Information Center

    Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi

    2016-01-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depends specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic…

  14. Promoting broad and stable improvements in low-income children's numerical knowledge through playing number board games.

    PubMed

    Ramani, Geetha B; Siegler, Robert S

    2008-01-01

    Theoretical analyses of the development of numerical representations suggest that playing linear number board games should enhance young children's numerical knowledge. Consistent with this prediction, playing such a game for roughly 1 hr increased low-income preschoolers' (mean age = 5.4 years) proficiency on 4 diverse numerical tasks: numerical magnitude comparison, number line estimation, counting, and numeral identification. The gains remained 9 weeks later. Classmates who played an identical game, except for the squares varying in color rather than number, did not improve on any measure. Also as predicted, home experience playing number board games correlated positively with numerical knowledge. Thus, playing number board games with children from low-income backgrounds may increase their numerical knowledge at the outset of school.

  15. Semantic and perceptual processing of number symbols: evidence from a cross-linguistic fMRI adaptation study.

    PubMed

    Holloway, Ian D; Battista, Christian; Vogel, Stephan E; Ansari, Daniel

    2013-03-01

    The ability to process the numerical magnitude of sets of items has been characterized in many animal species. Neuroimaging data have associated this ability to represent nonsymbolic numerical magnitudes (e.g., arrays of dots) with activity in the bilateral parietal lobes. Yet the quantitative abilities of humans are not limited to processing the numerical magnitude of nonsymbolic sets. Humans have used this quantitative sense as the foundation for symbolic systems for the representation of numerical magnitude. Although numerical symbol use is widespread in human cultures, the brain regions involved in processing of numerical symbols are just beginning to be understood. Here, we investigated the brain regions underlying the semantic and perceptual processing of numerical symbols. Specifically, we used an fMRI adaptation paradigm to examine the neural response to Hindu-Arabic numerals and Chinese numerical ideographs in a group of Chinese readers who could read both symbol types and a control group who could read only the numerals. Across groups, the Hindu-Arabic numerals exhibited ratio-dependent modulation in the left IPS. In contrast, numerical ideographs were associated with activation in the right IPS, exclusively in the Chinese readers. Furthermore, processing of the visual similarity of both digits and ideographs was associated with activation of the left fusiform gyrus. Using culture as an independent variable, we provide clear evidence for differences in the brain regions associated with the semantic and perceptual processing of numerical symbols. Additionally, we reveal a striking difference in the laterality of parietal activation between the semantic processing of the two symbols types.

  16. Sources of Individual Differences in Emerging Competence With Numeration Understanding Versus Multidigit Calculation Skill

    PubMed Central

    Fuchs, Lynn S.; Geary, David C.; Fuchs, Douglas; Compton, Donald L.; Hamlett, Carol L.

    2014-01-01

    This study investigated contributions of general cognitive abilities and foundational mathematical competencies to numeration understanding (i.e., base-10 structure) versus multidigit calculation skill. Children (n = 394, M = 6.5 years) were assessed on general cognitive abilities and foundational numerical competencies at start of 1st grade; on the same numerical competencies, multidigit calculation skill, and numeration understanding at end of 2nd grade; and on multidigit calculation skill and numeration understanding at end of 3rd grade. Path-analytic mediation analysis revealed that general cognitive predictors exerted more direct and more substantial effects on numeration understanding than on multidigit calculations. Foundational mathematics competencies contributed to both outcomes, but largely via 2nd-grade mathematics achievement, and results suggest a mutually supportive role between numeration understanding and multidigit calculations. PMID:25284885

  17. Executive Function Effects and Numerical Development in Children: Behavioural and ERP Evidence from a Numerical Stroop Paradigm

    ERIC Educational Resources Information Center

    Soltesz, Fruzsina; Goswami, Usha; White, Sonia; Szucs, Denes

    2011-01-01

    Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the…

  18. Numeral-Incorporating Roots in Numeral Systems: A Comparative Analysis of Two Sign Languages

    ERIC Educational Resources Information Center

    Fuentes, Mariana; Massone, Maria Ignacia; Fernandez-Viader, Maria del Pilar; Makotrinsky, Alejandro; Pulgarin, Francisca

    2010-01-01

    Numeral-incorporating roots in the numeral systems of Argentine Sign Language (LSA) and Catalan Sign Language (LSC), as well as the main features of the number systems of both languages, are described and compared. Informants discussed the use of numerals and roots in both languages (in most cases in natural contexts). Ten informants took part in…

  19. Ordinal Expressions in Japanese. Papers in Japanese Linguistics, Vol. 2, No. 1.

    ERIC Educational Resources Information Center

    Backus, Robert L.

    The varied forms and semantic factors of Japanese ordinal expressions are related to one another in a coherent system. In Japanese, the cardinal number form is a numeral compound in construction with a referent. The numeral compound consists of a number and a numeral adjunct. Numeral adjuncts are derived from bound forms, or numeral suffixes, and…

  20. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  1. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods : algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
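
    As a minimal, hedged illustration of the central idea in the two records above (a numerical routine that returns an uncertainty alongside its estimate), the sketch below uses simple Monte Carlo integration as a stand-in; it is not the authors' implementation, and the function name and parameters are illustrative.

```python
# Sketch: a numerical integration routine that returns both an estimate and
# an uncertainty, in the spirit of probabilistic numerics (plain Monte Carlo
# is used here as the stand-in; the paper's methods are more sophisticated).
import numpy as np

def integrate_with_uncertainty(f, a, b, n=10_000, rng=None):
    """Estimate the integral of f over [a, b] and a 1-sigma uncertainty."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(a, b, n)
    y = (b - a) * f(x)
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

estimate, sigma = integrate_with_uncertainty(np.sin, 0.0, np.pi)
print(f"integral = {estimate:.4f} +/- {sigma:.4f}   (exact value: 2)")
# Downstream computations can propagate `sigma` instead of silently trusting
# the point estimate -- the central theme of probabilistic numerics.
```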

  2. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  3. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    NASA Astrophysics Data System (ADS)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, supports learning through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges; the main ones are a dense curriculum, which makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to examine how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students of the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using Excel spreadsheets combined with another course. The results of this research complement studies on how to integrate numerical computation into physics learning using Excel spreadsheets.

  4. Preface of "The Second Symposium on Border Zones Between Experimental and Numerical Application Including Solution Approaches By Extensions of Standard Numerical Methods"

    NASA Astrophysics Data System (ADS)

    Ortleb, Sigrun; Seidel, Christian

    2017-07-01

    In this second symposium at the limits of experimental and numerical methods, recent research is presented on practically relevant problems. Presentations discuss experimental investigation as well as numerical methods with a strong focus on application. In addition, problems are identified which require a hybrid experimental-numerical approach. Topics include fast explicit diffusion applied to a geothermal energy storage tank, noise in experimental measurements of electrical quantities, thermal fluid structure interaction, tensegrity structures, experimental and numerical methods for Chladni figures, optimized construction of hydroelectric power stations, experimental and numerical limits in the investigation of rain-wind induced vibrations as well as the application of exponential integrators in a domain-based IMEX setting.

  5. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulations is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, errors induced by the computational model, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that control error and uncertainty during the numerical simulation so that its reliability can be improved.
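
    One standard ingredient of discretization-error estimation (named here as a general verification technique, not as the authors' specific methodology; the quantities and numbers below are illustrative) is Richardson extrapolation, which estimates the error of a fine-grid result from solutions on two grids:

```python
# Sketch: Richardson extrapolation / grid-convergence error estimate.
# Given a quantity of interest computed on a coarse and a fine grid with a
# scheme of formal order p and refinement ratio r, estimate the
# discretization error of the fine-grid value.  General technique only.
def richardson_extrapolate(f_coarse, f_fine, r=2.0, p=2):
    """Return (extrapolated value, estimated error of f_fine), assuming
    f(h) = f_exact + C * h**p."""
    correction = (f_fine - f_coarse) / (r**p - 1.0)
    return f_fine + correction, abs(correction)

# Illustrative drag coefficients from a coarse and a fine grid (made-up values).
cd_coarse, cd_fine = 0.02875, 0.02831
cd_extrap, err = richardson_extrapolate(cd_coarse, cd_fine)
print(f"fine grid: {cd_fine:.5f}  extrapolated: {cd_extrap:.5f}  est. error: {err:.5f}")
```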

  6. Influence of biases in numerical magnitude allocation on human prosocial decision making.

    PubMed

    Arshad, Qadeer; Nigmatullina, Yuliya; Siddiqui, Shuaib; Franka, Mustafa; Mediratta, Saniya; Ramachandaran, Sanjeev; Lobo, Rhannon; Malhotra, Paresh A; Roberts, R E; Bronstein, Adolfo M

    2017-12-01

    Over the past decade neuroscientific research has attempted to probe the neurobiological underpinnings of human prosocial decision making. Such research has almost ubiquitously employed tasks such as the dictator game or similar variations (i.e., ultimatum game). Considering the explicit numerical nature of such tasks, it is surprising that the influence of numerical cognition on decision making during task performance remains unknown. While performing these tasks, participants typically tend to anchor on a 50:50 split that necessitates an explicit numerical judgement (i.e., number-pair bisection). Accordingly, we hypothesize that the decision-making process during the dictator game recruits cognitive processes overlapping with those known to be engaged during number-pair bisection. We observed that biases in numerical magnitude allocation correlated with the formulation of decisions during the dictator game. That is, intrinsic biases toward smaller numerical magnitudes were associated with the formulation of less favorable decisions, whereas biases toward larger magnitudes were associated with more favorable choices. We proceeded to corroborate this relationship by subliminally and systematically inducing biases in numerical magnitude toward either higher or lower numbers using a visuo-vestibular stimulation paradigm. Such subliminal alterations in numerical magnitude allocation led to proportional and corresponding changes in an individual's decision making during the dictator game. Critically, no relationship was observed between either intrinsic or induced biases in numerical magnitude and decision making when assessed using a non-numerical prosocial questionnaire. Our findings demonstrate numerical influences on decisions formulated during the dictator game and highlight the necessity to control for confounds associated with numerical cognition in human decision-making paradigms. NEW & NOTEWORTHY We demonstrate that intrinsic biases in numerical magnitude can directly predict the amount of money donated by an individual to an anonymous stranger during the dictator game. Furthermore, subliminally inducing perceptual biases in numerical-magnitude allocation can actively drive prosocial choices in the corresponding direction. Our findings provide evidence for numerical influences on decision making during performance of the dictator game. Accordingly, without the implementation of an adequate control for numerical influences, the dictator game and other tasks with an inherent numerical component (i.e., ultimatum game) should be employed with caution in the assessment of human behavior. Copyright © 2017 the American Physiological Society.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Kim, Seung Jun

    In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes, and a theoretical stability map was derived from the original stability analysis. Numerical results obtained in this paper show very good agreement with Welander’s theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, with the high-order methods giving much smaller numerical errors than the low-order methods. For the stability analysis, the high-order numerical methods predict the stability map correctly, while the low-order numerical methods fail to do so: all theoretically unstable cases are predicted to be stable by the low-order methods. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.

  8. ERP Correlates of Verbal and Numerical Probabilities in Risky Choices: A Two-Stage Probability Processing View

    PubMed Central

    Li, Shu; Du, Xue-Lei; Li, Qi; Xuan, Yan-Hua; Wang, Yun; Rao, Li-Lin

    2016-01-01

    Two kinds of probability expressions, verbal and numerical, have been used to characterize the uncertainty that people face. However, the question of whether verbal and numerical probabilities are cognitively processed in a similar manner remains unresolved. From a levels-of-processing perspective, verbal and numerical probabilities may be processed differently during early sensory processing but similarly in later semantic-associated operations. This event-related potential (ERP) study investigated the neural processing of verbal and numerical probabilities in risky choices. The results showed that verbal probability and numerical probability elicited different N1 amplitudes but that verbal and numerical probabilities elicited similar N2 and P3 waveforms in response to different levels of probability (high to low). These results were consistent with a levels-of-processing framework and suggest some internal consistency between the cognitive processing of verbal and numerical probabilities in risky choices. Our findings shed light on possible mechanism underlying probability expression and may provide the neural evidence to support the translation of verbal to numerical probabilities (or vice versa). PMID:26834612

  9. Numerical human models for accident research and safety - potentials and limitations.

    PubMed

    Praxl, Norbert; Adamec, Jiri; Muggenthaler, Holger; von Merten, Katja

    2008-01-01

    The method of numerical simulation is frequently used in the area of automotive safety. Recently, numerical models of the human body have been developed for the numerical simulation of occupants. Different approaches in modelling the human body have been used: the finite-element and the multibody technique. Numerical human models representing the two modelling approaches are introduced and the potentials and limitations of these models are discussed.

  10. Cognitive Strategy Use and Measured Numeric Ability in Immediate- and Long-Term Recall of Everyday Numeric Information

    PubMed Central

    Bermingham, Douglas; Hill, Robert D.; Woltz, Dan; Gardner, Michael K.

    2013-01-01

    The goals of this study were to assess the primary effects of the use of cognitive strategy and a combined measure of numeric ability on recall of every-day numeric information (i.e. prices). Additionally, numeric ability was assessed as a moderator in the relationship between strategy use and memory for prices. One hundred participants memorized twelve prices that varied from 1 to 6 digits; they recalled these immediately and after 7 days. The use of strategies, assessed through self-report, was associated with better overall recall, but not forgetting. Numeric ability was not associated with either better overall recall or forgetting. A small moderating interaction was found, in which higher levels of numeric ability enhanced the beneficial effects of strategy use on overall recall. Exploratory analyses found two further small moderating interactions: simple strategy use enhanced overall recall at higher levels of numeric ability, compared to complex strategy use; and complex strategy use was associated with lower levels of forgetting, but only at higher levels of numeric ability, compared to the simple strategy use. These results provide support for an objective measure of numeric ability, as well as adding to the literature on memory and the benefits of cognitive strategy use. PMID:23483964

  11. Cognitive strategy use and measured numeric ability in immediate- and long-term recall of everyday numeric information.

    PubMed

    Bermingham, Douglas; Hill, Robert D; Woltz, Dan; Gardner, Michael K

    2013-01-01

    The goals of this study were to assess the primary effects of the use of cognitive strategy and a combined measure of numeric ability on recall of every-day numeric information (i.e. prices). Additionally, numeric ability was assessed as a moderator in the relationship between strategy use and memory for prices. One hundred participants memorized twelve prices that varied from 1 to 6 digits; they recalled these immediately and after 7 days. The use of strategies, assessed through self-report, was associated with better overall recall, but not forgetting. Numeric ability was not associated with either better overall recall or forgetting. A small moderating interaction was found, in which higher levels of numeric ability enhanced the beneficial effects of strategy use on overall recall. Exploratory analyses found two further small moderating interactions: simple strategy use enhanced overall recall at higher levels of numeric ability, compared to complex strategy use; and complex strategy use was associated with lower levels of forgetting, but only at higher levels of numeric ability, compared to the simple strategy use. These results provide support for an objective measure of numeric ability, as well as adding to the literature on memory and the benefits of cognitive strategy use.

  12. Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.

    PubMed

    Bhattacharya, Ujjwal; Chaudhuri, B B

    2009-03-01

    This article primarily concerns the problem of isolated handwritten numeral recognition of major Indian scripts. The principal contributions presented here are (a) pioneering development of two databases for handwritten numerals of two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet based multiresolution representations and multilayer perceptron classifiers and (c) application of (b) for the recognition of mixed handwritten numerals of three Indian scripts Devanagari, Bangla and English. The present databases include respectively 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations and these can be made available free of cost to researchers of other academic Institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurred even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of three classifiers of the previous stages. This scheme has been extended to the situation when the script of a document is not known a priori or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mails and table-form documents.
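
    A hedged sketch of the coarse-to-fine cascade-with-rejection idea described above (the thresholds, the averaging combination rule, and the toy "classifiers" are illustrative assumptions; in the paper the final stage is itself a trained multilayer perceptron, not a simple average):

```python
# Sketch of a coarse-to-fine classifier cascade with rejection, in the spirit
# of the multistage recognition scheme described above.  Illustrative only.
import numpy as np

def cascade_classify(prob_fns, x, accept_threshold=0.9):
    """Run classifiers from coarsest to finest resolution; accept the first
    confident answer, otherwise combine all stages as a final attempt."""
    stage_probs = []
    for predict_proba in prob_fns:
        p = np.asarray(predict_proba(x), dtype=float)
        stage_probs.append(p)
        if p.max() >= accept_threshold:          # confident: stop early
            return int(p.argmax()), p.max(), len(stage_probs)
    combined = np.mean(stage_probs, axis=0)      # stand-in for the final combining MLP
    return int(combined.argmax()), combined.max(), len(stage_probs)

# Toy stand-ins for MLPs fed with coarse/medium/fine wavelet features of a digit.
coarse = lambda x: [0.55, 0.40, 0.05]
medium = lambda x: [0.70, 0.25, 0.05]
fine   = lambda x: [0.95, 0.04, 0.01]

label, conf, stages = cascade_classify([coarse, medium, fine], x=None)
print(f"predicted class {label} with confidence {conf:.2f} after {stages} stage(s)")
```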

  13. 28 CFR 553.11 - Limitations on inmate personal property.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Numerical limitations. Authorized personal property may be subject to numerical limitations. The institution's Admission and Orientation program shall include notification to the inmate of any numerical limitations in effect at the institution and a current list of any numerical limitations shall be posted on...

  14. 28 CFR 553.11 - Limitations on inmate personal property.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Numerical limitations. Authorized personal property may be subject to numerical limitations. The institution's Admission and Orientation program shall include notification to the inmate of any numerical limitations in effect at the institution and a current list of any numerical limitations shall be posted on...

  15. Building Blocks for Reliable Complex Nonlinear Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi N. (Technical Monitor)

    2002-01-01

    This talk describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.
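
    A standard model-problem illustration of this theme (the logistic ODE under explicit Euler; a common textbook example, not necessarily one of the talk's own cases) shows how a discretization can exhibit spurious dynamics that the underlying ODE does not have:

```python
# Sketch: explicit Euler applied to the logistic ODE du/dt = u*(1 - u).
# The true ODE always relaxes monotonically to u = 1, but the discrete map
# u_{n+1} = u_n + dt*u_n*(1 - u_n) bifurcates to spurious oscillations and
# chaos as dt grows -- numerics showing dynamics the model does not possess.
def euler_logistic(u0=0.5, dt=0.5, n_steps=2000, keep=8):
    u = u0
    for _ in range(n_steps):
        u = u + dt * u * (1.0 - u)
    tail = []
    for _ in range(keep):                 # sample the long-time behavior
        u = u + dt * u * (1.0 - u)
        tail.append(round(u, 4))
    return tail

for dt in (0.5, 2.3, 2.5, 2.7):
    print(f"dt = {dt}: long-time iterates -> {euler_logistic(dt=dt)}")
# dt = 0.5 converges to the true steady state u = 1; dt = 2.3 gives a spurious
# period-2 cycle, dt = 2.5 a period-4 cycle, and dt = 2.7 chaotic wandering,
# purely as artifacts of the time-stepping scheme.
```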

  16. Building Blocks for Reliable Complex Nonlinear Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    2005-01-01

    This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations.

  17. Sedimentary Geothermal Feasibility Study: October 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad; Zerpa, Luis

    The objective of this project is to analyze the feasibility of commercial geothermal projects using numerical reservoir simulation, considering a sedimentary reservoir with low permeability that requires productivity enhancement. A commercial thermal reservoir simulator (STARS, from Computer Modeling Group, CMG) is used in this work for numerical modeling. In the first stage of this project (FY14), a hypothetical numerical reservoir model was developed and validated against an analytical solution. The following model parameters were considered to obtain an acceptable match between the numerical and analytical solutions: grid block size, time step, and reservoir areal dimensions, the latter related to boundary effects on the numerical solution. Systematic model runs showed that insufficient grid sizing generates numerical dispersion that causes the numerical model to underestimate the thermal breakthrough time compared to the analytical model. As grid sizing is decreased, the model results converge on a solution. Likewise, an insufficient reservoir model area introduces boundary effects in the numerical solution that cause the model results to differ from the analytical solution.
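
    A minimal sketch of the grid-sizing effect described above (a 1D advected temperature front solved with a first-order upwind scheme; purely illustrative, not the STARS reservoir model): coarse grids smear the thermal front, so a breakthrough criterion based on the first detectable temperature rise is met too early, and refinement converges toward the sharp-front value.

```python
# Sketch: numerical dispersion of an advected thermal front on coarse vs fine
# grids (1D upwind scheme; illustrative only).  "Breakthrough" is taken as the
# first time the outlet temperature exceeds 5% of the injected step.
import numpy as np

def breakthrough_time(nx, L=100.0, v=1.0, cfl=0.5, threshold=0.05, t_max=200.0):
    dx = L / nx
    dt = cfl * dx / v
    T = np.zeros(nx)                   # dimensionless temperature
    T[0] = 1.0                         # injection boundary value
    t = 0.0
    while t < t_max:
        T[1:] -= v * dt / dx * (T[1:] - T[:-1])   # upwind advection step
        T[0] = 1.0
        t += dt
        if T[-1] >= threshold:
            return t
    return np.inf

for nx in (25, 50, 100, 200, 400):
    print(f"nx = {nx:4d}  breakthrough time = {breakthrough_time(nx):7.2f}")
# The dispersion-free breakthrough time is L/v = 100; coarse grids
# underestimate it, and the estimate converges as the grid is refined.
```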

  18. Numerical modelling techniques of soft soil improvement via stone columns: A brief review

    NASA Astrophysics Data System (ADS)

    Zukri, Azhani; Nazir, Ramli

    2018-04-01

    There are a number of numerical studies on stone column systems in the literature. Most of these studies involved two-dimensional analysis of stone column behaviour, while only a few used three-dimensional analysis. The software most commonly used in those studies was Plaxis 2D and 3D; other software used for numerical analysis includes DIANA, EXAMINE, ZSoil, ABAQUS, ANSYS, NISA, GEOSTUDIO, CRISP, TOCHNOG, CESAR, GEOFEM (2D & 3D), FLAC, and FLAC 3. This paper reviews methodological approaches to modelling stone columns numerically, in both two-dimensional and three-dimensional analyses. The numerical techniques and suitable constitutive models used in the studies are also discussed, and the validation methods used to verify the numerical analyses are presented. This review also serves as a guide for junior engineers, setting out the applicable procedures and considerations when constructing and running a two- or three-dimensional numerical analysis, while citing numerous relevant references.

  19. Building Blocks for Reliable Complex Nonlinear Numerical Simulations. Chapter 2

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.

  20. Integrating spatial and numerical structure in mathematical patterning

    NASA Astrophysics Data System (ADS)

    Ni’mah, K.; Purwanto; Irawan, E. B.; Hidayanto, E.

    2018-03-01

    This paper reports a study monitoring the integration of spatial and numerical structure in the mathematical patterning skills of 30 grade 7 junior high school students. The purpose of this research is to clarify the processes by which learners construct new knowledge in mathematical patterning. Findings indicate that: (1) some students are unable to organize either the spatial or the numerical structure, (2) some students were only able to organize the spatial structure, while their numerical structure was still incorrect, (3) some students were only able to organize the numerical structure, while their spatial structure was still incorrect, and (4) some students were able to organize both the spatial and the numerical structure.

  1. Basic and Advanced Numerical Performances Relate to Mathematical Expertise but Are Fully Mediated by Visuospatial Skills

    PubMed Central

    2016-01-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depends specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate compared with nonmathematicians when mapping positive, but not negative numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by block design subtest, the mediation analysis revealed that the relation between the performance in the number line task and the group membership was explained by non-numerical visuospatial skills. These results demonstrate that relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. PMID:26913930

  2. 75 FR 15440 - Guidance for Industry on Standards for Securing the Drug Supply Chain-Standardized Numerical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ...] Guidance for Industry on Standards for Securing the Drug Supply Chain--Standardized Numerical... industry entitled ``Standards for Securing the Drug Supply Chain-Standardized Numerical Identification for... the Drug Supply Chain-Standardized Numerical Identification for Prescription Drug Packages.'' In the...

  3. Discontinuous Galerkin methods for Hamiltonian ODEs and PDEs

    NASA Astrophysics Data System (ADS)

    Tang, Wensheng; Sun, Yajuan; Cai, Wenjun

    2017-02-01

    In this article, we present a unified framework of discontinuous Galerkin (DG) discretizations for Hamiltonian ODEs and PDEs. We show that with appropriate numerical fluxes the numerical algorithms deduced from DG discretizations can be combined with the symplectic methods in time to derive the multi-symplectic PRK schemes. The resulting numerical discretizations are applied to the linear and nonlinear Schrödinger equations. Some conservative properties of the numerical schemes are investigated and confirmed in the numerical experiments.
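
    As a toy illustration of why structure-preserving time discretizations matter for Hamiltonian problems (a generic symplectic integrator on the harmonic oscillator; this is not the multi-symplectic DG/PRK construction of the paper, and the parameters are illustrative):

```python
# Sketch: energy behavior of a symplectic integrator (Stormer-Verlet) vs a
# non-symplectic one (explicit Euler) on the harmonic oscillator
# H(q, p) = (p**2 + q**2) / 2.  Generic illustration only.
def energy(q, p):
    return 0.5 * (p * p + q * q)

def run(stepper, dt=0.1, n=1000):
    q, p = 1.0, 0.0
    for _ in range(n):
        q, p = stepper(q, p, dt)
    return energy(q, p)

def explicit_euler(q, p, dt):
    return q + dt * p, p - dt * q

def stormer_verlet(q, p, dt):
    p_half = p - 0.5 * dt * q            # kick
    q_new = q + dt * p_half              # drift
    p_new = p_half - 0.5 * dt * q_new    # kick
    return q_new, p_new

print("initial energy :", energy(1.0, 0.0))
print("explicit Euler :", run(explicit_euler))   # energy drifts up by orders of magnitude
print("Stormer-Verlet :", run(stormer_verlet))   # energy stays bounded near 0.5
```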

  4. The contributions of numerical acuity and non-numerical stimulus features to the development of the number sense and symbolic math achievement.

    PubMed

    Starr, Ariel; DeWind, Nicholas K; Brannon, Elizabeth M

    2017-11-01

    Numerical acuity, frequently measured by a Weber fraction derived from nonsymbolic numerical comparison judgments, has been shown to be predictive of mathematical ability. However, recent findings suggest that stimulus controls in these tasks are often insufficiently implemented, and the proposal has been made that alternative visual features or inhibitory control capacities may actually explain this relation. Here, we use a novel mathematical algorithm to parse the relative influence of numerosity from other visual features in nonsymbolic numerical discrimination and to examine the strength of the relations between each of these variables, including inhibitory control, and mathematical ability. We examined these questions developmentally by testing 4-year-old children, 6-year-old children, and adults with a nonsymbolic numerical comparison task, a symbolic math assessment, and a test of inhibitory control. We found that the influence of non-numerical features decreased significantly over development but that numerosity was a primary determinant of decision making at all ages. In addition, numerical acuity was a stronger predictor of math achievement than either non-numerical bias or inhibitory control in children. These results suggest that the ability to selectively attend to number contributes to the maturation of the number sense and that numerical acuity, independent of inhibitory control, contributes to math achievement in early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Differences in Arithmetic Performance between Chinese and German Children Are Accompanied by Differences in Processing of Symbolic Numerical Magnitude

    PubMed Central

    Lonnemann, Jan; Linkersdörfer, Janosch; Hasselhorn, Marcus; Lindberg, Sven

    2016-01-01

    Symbolic numerical magnitude processing skills are assumed to be fundamental to arithmetic learning. It is, however, still an open question whether better arithmetic skills are reflected in symbolic numerical magnitude processing skills. To address this issue, Chinese and German third graders were compared regarding their performance in arithmetic tasks and in a symbolic numerical magnitude comparison task. Chinese children performed better in the arithmetic tasks and were faster in deciding which one of two Arabic numbers was numerically larger. The group difference in symbolic numerical magnitude processing was fully mediated by the performance in arithmetic tasks. We assume that a higher degree of familiarity with arithmetic in Chinese compared to German children leads to a higher speed of retrieving symbolic numerical magnitude knowledge. PMID:27630606

  6. Numerical analysis and experimental research of the rubber boot of the joint drive vehicle

    NASA Astrophysics Data System (ADS)

    Ziobro, Jan

    2016-04-01

    The article presents numerical studies and experimental research on the rubber boot of a vehicle drive joint. Performance requirements are discussed and the coefficients required for the mathematical model used in the numerical simulation are determined. The behavior of the boot was examined in the MSC.MARC environment. The analysis used a hyperelastic two-parameter Mooney-Rivlin material model, a large-displacement procedure, a safe contact condition, and friction on the sides of the boot. The 3D numerical model of the joint boot was analyzed under tensile, compressive, centrifugal, and angular loads. Numerous results of these studies are presented. An appropriate test stand was built, and the results of the numerical analysis were compared with the results of the experimental studies. Numerous conclusions and recommendations of a practical character are presented.
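
    For reference, a minimal sketch of the two-parameter Mooney-Rivlin model mentioned above (incompressible uniaxial case; the coefficient values are illustrative placeholders, not those identified for the boot material in the study):

```python
# Sketch: uniaxial Cauchy stress for an incompressible two-parameter
# Mooney-Rivlin material, W = C10*(I1 - 3) + C01*(I2 - 3).
# For a uniaxial stretch lambda: sigma = 2*(lambda**2 - 1/lambda)*(C10 + C01/lambda).
# C10 and C01 below are illustrative placeholders, not fitted values.
def mooney_rivlin_uniaxial_stress(stretch, c10=0.3, c01=0.1):   # coefficients in MPa
    return 2.0 * (stretch**2 - 1.0 / stretch) * (c10 + c01 / stretch)

for lam in (1.0, 1.2, 1.5, 2.0):
    print(f"stretch = {lam:.1f}  sigma = {mooney_rivlin_uniaxial_stress(lam):6.3f} MPa")
```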

  7. 40 CFR 180.33 - Fees.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (b) Each petition for the establishment of a tolerance at a lower numerical level or levels than a... additional raw agricultural commodities at the same numerical level as a tolerance already established for... has a tolerance for other uses at the same numerical level or a higher numerical level shall be...

  8. Numerical Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An overview of historical and current numerical aerodynamic simulation (NAS) is given. The capabilities and goals of the Numerical Aerodynamic Simulation Facility are outlined. Emphasis is given to numerical flow visualization and its applications to structural analysis of aircraft and spacecraft bodies. The uses of NAS in computational chemistry, engine design, and galactic evolution are mentioned.

  9. Numerical considerations for Lagrangian stochastic dispersion models: Eliminating rogue trajectories, and the importance of numerical accuracy

    USDA-ARS?s Scientific Manuscript database

    When Lagrangian stochastic models for turbulent dispersion are applied to complex flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behavior in the numerical solution. This paper discusses numerical considerations when solving the Langevin-based particle velo...

  10. Revealing Numerical Solutions of a Differential Equation

    ERIC Educational Resources Information Center

    Glaister, P.

    2006-01-01

    In this article, the author considers a student exercise that involves determining the exact and numerical solutions of a particular differential equation. He shows how a typical student solution is at variance with a numerical solution, suggesting that the numerical solution is incorrect. However, further investigation shows that this numerical…

  11. The Relationship between Study Habits, Attitudes and Orientation among Developmental Freshmen of Kean College.

    ERIC Educational Resources Information Center

    Gersten, Susan G. Liss

    A study was conducted to determine if visual linguistic numeric, auditory linguistic numeric, and tactile concrete learners have statistically significant different study habits, study attitudes, and study orientation than their low visual linguistic numeric, low auditory linguistic numeric, and low tactile concrete counterparts. Data were…

  12. Nonlinear dynamics and numerical uncertainties in CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1996-01-01

    The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.

  13. Numerical integration of asymptotic solutions of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1989-01-01

    Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.

  14. Extensive numerical study of a D-brane, anti-D-brane system in AdS 5 /CFT 4

    NASA Astrophysics Data System (ADS)

    Hegedűs, Árpád

    2015-04-01

    In this paper the hybrid-NLIE approach of [38] is extended to the ground state of a D-brane anti-D-brane system in AdS/CFT. The hybrid-NLIE equations presented in the paper are finite-component alternatives of the previously proposed TBA equations and they provide an appropriate framework for the numerical investigation of the ground state of the problem. Straightforward numerical iterative methods fail to converge, thus new numerical methods are worked out to solve the equations. Our numerical data confirm the previous TBA data. In view of the numerical results, the mysterious L = 1 case is also commented on in the paper.

  15. Binocular device for displaying numerical information in field of view

    NASA Technical Reports Server (NTRS)

    Fuller, H. V. (Inventor)

    1977-01-01

    An apparatus is described for superimposing numerical information on the field of view of binoculars. The invention has application in the flying of radio-controlled model airplanes. Information such as airspeed and angle of attack is sensed on a model airplane and transmitted back to earth, where this information is changed into numerical form. Optical means attached to the binoculars that the pilot uses to track the model airplane display the numerical information in the field of view of the binoculars. The device includes means for focusing the numerical information at infinity whereby the user of the binoculars can see both the field of view and the numerical information without refocusing his eyes.

  16. The effect of mathematics anxiety on the processing of numerical magnitude.

    PubMed

    Maloney, Erin A; Ansari, Daniel; Fugelsang, Jonathan A

    2011-01-01

    In an effort to understand the origins of mathematics anxiety, we investigated the processing of symbolic magnitude by high mathematics-anxious (HMA) and low mathematics-anxious (LMA) individuals by examining their performance on two variants of the symbolic numerical comparison task. In two experiments, a numerical distance by mathematics anxiety (MA) interaction was obtained, demonstrating that the effect of numerical distance on response times was larger for HMA than for LMA individuals. These data support the claim that HMA individuals have less precise representations of numerical magnitude than their LMA peers, suggesting that MA is associated with low-level numerical deficits that compromise the development of higher level mathematical skills.

  17. A comparison of numerical methods for the prediction of two-dimensional heat transfer in an electrothermal deicer pad. M.S. Thesis. Final Contractor Report

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1988-01-01

    Transient, numerical simulations of the deicing of composite aircraft components by electrothermal heating have been performed in a 2-D rectangular geometry. Seven numerical schemes and four solution methods were used to find the most efficient numerical procedure for this problem. The phase change in the ice was simulated using the Enthalpy method along with the Method for Assumed States. Numerical solutions illustrating deicer performance for various conditions are presented. Comparisons are made with previous numerical models and with experimental data. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
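
    The Enthalpy method named above handles the ice-water phase change by evolving enthalpy rather than temperature, so the melt front needs no explicit tracking. The following is a minimal 1-D sketch of that idea with generic ice properties and a heated boundary; it is an illustrative toy, not the report's 2-D composite-deicer simulation, and all parameter values are assumed for demonstration only.

    ```python
    import numpy as np

    # Minimal 1-D enthalpy-method sketch for a melting layer of ice. The volumetric
    # enthalpy H is advanced explicitly from heat conduction, and the temperature is
    # recovered from H through the phase-change relation, which handles the moving
    # melt front implicitly. All values below are illustrative placeholders.
    nx = 50
    dx = 0.001                           # m
    dt = 0.01                            # s (well below the explicit stability limit)
    k = 2.2                              # W/(m K), single conductivity for brevity
    rho, c, Lf = 917.0, 2100.0, 334e3    # density, specific heat, latent heat of fusion

    H = rho * c * (-10.0) * np.ones(nx)  # start as ice at -10 C; H = 0 defined at 0 C solid

    def temperature(H):
        """Invert the enthalpy-temperature relation (melting point at 0 C)."""
        T = np.where(H < 0.0, H / (rho * c), 0.0)                  # solid branch
        T = np.where(H > rho * Lf, (H - rho * Lf) / (rho * c), T)  # liquid branch
        return T                                                   # mushy zone stays at 0 C

    for _ in range(2000):
        T = temperature(H)
        T[0] = 20.0                                    # heated surface (Dirichlet boundary)
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
        H[1:-1] += dt * k * lap                        # explicit conduction step for the enthalpy

    liquid_fraction = np.clip(H / (rho * Lf), 0.0, 1.0)
    print("liquid fraction in the first interior cells:", np.round(liquid_fraction[1:6], 2))
    ```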

  18. Numerical Modeling of Active Flow Control in a Boundary Layer Ingesting Offset Inlet

    NASA Technical Reports Server (NTRS)

    Allan, Brian G.; Owens, Lewis R.; Berrier, Bobby L.

    2004-01-01

    This investigation evaluates the numerical prediction of flow distortion and pressure recovery for a boundary layer ingesting offset inlet with active flow control devices. The numerical simulations are computed using a Reynolds averaged Navier-Stokes code developed at NASA. The numerical results are validated by comparison to experimental wind tunnel tests conducted at NASA Langley Research Center at both low and high Mach numbers. Baseline comparisons showed good agreement between numerical and experimental results. Numerical simulations for the inlet with passive and active flow control also showed good agreement at low Mach numbers where experimental data has already been acquired. Numerical simulations of the inlet at high Mach numbers with flow control jets showed an improvement of the flow distortion. Studies on the location of the jet actuators, for the high Mach number case, were conducted to provide guidance for the design of a future experimental wind tunnel test.

  19. Intentional and automatic processing of numerical information in mathematical anxiety: testing the influence of emotional priming.

    PubMed

    Ashkenazi, Sarit

    2018-02-05

    Current theoretical approaches suggest that mathematical anxiety (MA) manifests itself as a weakness in quantity manipulations. This study is the first to examine automatic versus intentional processing of numerical information using the numerical Stroop paradigm in participants with high MA. To manipulate anxiety levels, we combined the numerical Stroop task with an affective priming paradigm. We took a group of college students with high MA and compared their performance to a group of participants with low MA. Under low anxiety conditions (neutral priming), participants with high MA showed relatively intact number processing abilities. However, under high anxiety conditions (mathematical priming), participants with high MA showed (1) higher processing of the non-numerical irrelevant information, which aligns with the theoretical view regarding deficits in selective attention in anxiety and (2) an abnormal numerical distance effect. These results demonstrate that abnormal, basic numerical processing in MA is context related.

  20. Zdeněk Kopal: Numerical Analyst

    NASA Astrophysics Data System (ADS)

    Křížek, M.

    2015-07-01

    We give a brief overview of Zdeněk Kopal's life, his activities in the Czech Astronomical Society, his collaboration with Vladimír Vand, and his studies at Charles University, Cambridge, Harvard, and MIT. Then we survey Kopal's professional life. He published 26 monographs and 20 conference proceedings. We will concentrate on Kopal's extensive monograph Numerical Analysis (1955, 1961) that is widely accepted to be the first comprehensive textbook on numerical methods. It describes, for instance, methods for polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations with initial or boundary conditions, and numerical solution of integral and integro-differential equations. Special emphasis will be laid on error analysis. Kopal himself applied numerical methods to celestial mechanics, in particular to the N-body problem. He also used Fourier analysis to investigate light curves of close binaries to discover their properties. This is, in fact, a problem from mathematical analysis.

  1. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  2. Numerical methods in heat transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, R.W.

    1985-01-01

    This third volume in the series in Numerical Methods in Engineering presents expanded versions of selected papers given at the Conference on Numerical Methods in Thermal Problems held in Venice in July 1981. In this reference work, contributors offer the current state of knowledge on the numerical solution of convective heat transfer problems and conduction heat transfer problems.

  3. Early Numerical Competence and Number Line Task Performance in Kindergarteners

    ERIC Educational Resources Information Center

    Fanari, Rachele; Meloni, Carla; Massidda, Davide

    2017-01-01

    This work aims to evaluate the relationship between early numerical competence in kindergarteners and their numerical representations as measured by the number line task (NLT). Thirty-four 5-year-old children participated in the study. Children's early performance on symbolic and non-symbolic numerical tasks was considered to determine which was a…

  4. Machine Shop. Module 8: CNC (Computerized Numerical Control). Instructor's Guide.

    ERIC Educational Resources Information Center

    Crosswhite, Dwight

    This document consists of materials for a five-unit course on the following topics: (1) safety guidelines; (2) coordinates and dimensions; (3) numerical control math; (4) programming for numerical control machines; and (5) setting and operating the numerical control machine. The instructor's guide begins with a list of competencies covered in the…

  5. 24 CFR 135.30 - Numerical goals for meeting the greatest extent feasible requirement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Numerical goals for meeting the... Concerns § 135.30 Numerical goals for meeting the greatest extent feasible requirement. (a) General. (1... of section 3 by meeting the numerical goals set forth in this section for providing training...

  6. 24 CFR 135.30 - Numerical goals for meeting the greatest extent feasible requirement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Numerical goals for meeting the... Concerns § 135.30 Numerical goals for meeting the greatest extent feasible requirement. (a) General. (1... of section 3 by meeting the numerical goals set forth in this section for providing training...

  7. Parental Numeric Language Input to Mandarin Chinese and English Speaking Preschool Children

    ERIC Educational Resources Information Center

    Chang, Alicia; Sandhofer, Catherine M.; Adelchanow, Lauren; Rottman, Benjamin

    2011-01-01

    The present study examined the number-specific parental language input to Mandarin- and English-speaking preschool-aged children. Mandarin and English transcripts from the CHILDES database were examined for amount of numeric speech, specific types of numeric speech and syntactic frames in which numeric speech appeared. The results showed that…

  8. 47 CFR 73.201 - Numerical designation of FM broadcast channels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Numerical designation of FM broadcast channels... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.201 Numerical designation of FM broadcast... numerical designations which are shown in the table below: Frequency (Mc/s) Channel No. 88.1 201 88.3 202 88...

  9. 47 CFR 73.201 - Numerical designation of FM broadcast channels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Numerical designation of FM broadcast channels... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.201 Numerical designation of FM broadcast... numerical designations which are shown in the table below: Frequency (Mc/s) Channel No. 88.1 201 88.3 202 88...

  10. 24 CFR 135.30 - Numerical goals for meeting the greatest extent feasible requirement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Numerical goals for meeting the... Concerns § 135.30 Numerical goals for meeting the greatest extent feasible requirement. (a) General. (1... of section 3 by meeting the numerical goals set forth in this section for providing training...

  11. 47 CFR 73.201 - Numerical designation of FM broadcast channels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Numerical designation of FM broadcast channels... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.201 Numerical designation of FM broadcast... numerical designations which are shown in the table below: Frequency (Mc/s) Channel No. 88.1 201 88.3 202 88...

  12. 24 CFR 135.30 - Numerical goals for meeting the greatest extent feasible requirement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Numerical goals for meeting the... Concerns § 135.30 Numerical goals for meeting the greatest extent feasible requirement. (a) General. (1... of section 3 by meeting the numerical goals set forth in this section for providing training...

  13. Continuity and Change in Children's Longitudinal Neural Responses to Numbers

    ERIC Educational Resources Information Center

    Emerson, Robert W.; Cantlon, Jessica F.

    2015-01-01

    Human children possess the ability to approximate numerical quantity nonverbally from a young age. Over the course of early childhood, children develop increasingly precise representations of numerical values, including a symbolic number system that allows them to conceive of numerical information as Arabic numerals or number words. Functional…

  14. Mapping among Number Words, Numerals, and Nonsymbolic Quantities in Preschoolers

    ERIC Educational Resources Information Center

    Hurst, Michelle; Anderson, Ursula; Cordes, Sara

    2017-01-01

    In mathematically literate societies, numerical information is represented in 3 distinct codes: a verbal code (i.e., number words); a digital, symbolic code (e.g., Arabic numerals); and an analogical code (i.e., quantities; Dehaene, 1992). To communicate effectively using these numerical codes, our understanding of number must involve an…

  15. Applications of numerical methods to simulate the movement of contaminants in groundwater.

    PubMed Central

    Sun, N Z

    1989-01-01

    This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on the numerical methods of advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method that can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When the advection transport dominates the dispersion transport, two kinds of numerical difficulties, overshoot and numerical dispersion, are always involved in solving standard, finite difference methods and finite element methods. To overcome these numerical difficulties, various numerical techniques are developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given and we also mention the problems of parameter identification, reliability analysis, and optimal-experiment design that are absolutely necessary for constructing a practical model. PMID:2695327
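
    To illustrate the overshoot and numerical-dispersion issues described above, the sketch below advects a sharp 1-D concentration front with two classic schemes: a second-order (Lax-Wendroff) update, which oscillates and overshoots near the front, and first-order upstream weighting, which stays bounded but smears the front. This is a generic demonstration in Python, not the multiple cell balance method or any specific scheme from the review; all values are arbitrary.

    ```python
    import numpy as np

    # 1-D advection of a step-like contaminant front at constant velocity u,
    # comparing a second-order scheme (overshoots) with upstream weighting
    # (monotone but smeared). Periodic boundaries via np.roll for simplicity.
    nx, u, T = 200, 1.0, 0.3
    dx = 1.0 / nx
    dt = 0.4 * dx / u                     # Courant number 0.4
    nu = u * dt / dx
    x = (np.arange(nx) + 0.5) * dx
    c0 = np.where(x < 0.2, 1.0, 0.0)      # sharp concentration front

    def advance(c, scheme):
        c = c.copy()
        for _ in range(int(T / dt)):
            cm, cp = np.roll(c, 1), np.roll(c, -1)
            if scheme == "lax-wendroff":  # second order: dispersive oscillations near the front
                c = c - 0.5 * nu * (cp - cm) + 0.5 * nu ** 2 * (cp - 2.0 * c + cm)
            else:                         # upstream weighting for u > 0: bounded but diffusive
                c = c - nu * (c - cm)
        return c

    for scheme in ("lax-wendroff", "upstream"):
        c = advance(c0, scheme)
        print(f"{scheme:13s}: min = {c.min():+.3f}, max = {c.max():.3f}")
    ```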

  16. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    PubMed

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities (e.g., mental calculation and ratio processing skills) were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be the subject of future investigations on decision making under risk.

  17. Prediction of dynamic and aerodynamic characteristics of the centrifugal fan with forward curved blades

    NASA Astrophysics Data System (ADS)

    Polanský, Jiří; Kalmár, László; Gášpár, Roman

    2013-12-01

    The main aim of this paper is to determine the aerodynamic characteristics of a centrifugal fan with forward-curved blades based on numerical modeling. Three variants of geometry were investigated. The first, basic "A" variant contains 12 blades. The geometry of the second "B" variant contains 12 blades and 12 semi-blades of optimal length [1]. The third, control variant "C" contains 24 blades without semi-blades. Numerical calculations were performed using the Ansys CFD software. Another aim of this paper is to compare the results of the numerical simulation with the results of an approximate numerical procedure. The applied approximate numerical procedure [2] is designed to determine the characteristics of the turbulent flow in the bladed space of a centrifugal-flow fan impeller. This numerical method is an extension of the hydro-dynamical cascade theory for incompressible and inviscid fluid flow. The paper also partially compares results from the numerical simulation with results from the experimental investigation. Acoustic phenomena observed during the experiment manifested themselves in the numerical simulation as a deterioration of calculation stability and as oscillation of the residuals, and thus also of the flow field. Pressure pulsations are evaluated using frequency analysis for each variant and working condition.

  18. Magnitude knowledge: the common core of numerical development.

    PubMed

    Siegler, Robert S

    2016-05-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development. © 2016 John Wiley & Sons Ltd.

  19. Basic numerical competences in large-scale assessment data: Structure and long-term relevance.

    PubMed

    Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian

    2018-03-01

    Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results support a differentiated view of early numeracy, with distinct basic numerical competences in kindergarten reflected in large-scale assessment data. Considering different basic numerical competences allows their specific predictive value to be evaluated not only for later mathematical achievement but also for mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. [The functional independence of lexical numeric knowledge and the representation of magnitude: evidence from one case].

    PubMed

    Salguero-Alcañiz, M P; Lorca-Marín, J A; Alameda-Bailén, J R

    The ultimate purpose of cognitive neuropsychology is to find out how normal cognitive processes work. To this end, it studies subjects who have suffered brain damage but who, until their accident, were competent in the skills that are later to become the object of study. It is therefore necessary to study patients who have difficulty in processing numbers and in calculating in order to further our knowledge of these processes in the normal population. Our aim was to analyse the relationships between the different cognitive processes involved in numeric knowledge. We studied the case of a female patient who suffered an ischemic infarct in the perisylvian region, on both a superficial and deep level. She presented predominantly expressive mixed aphasia and predominantly brachial hemiparesis. Numeric processing and calculation were evaluated. The patient still had her lexical numeric knowledge but her quantitative numeric knowledge was impaired. These alterations in the quantitative numeric knowledge are evidenced by the difficulties the patient had in numeric comprehension tasks, as well as the severe impairments displayed in calculation. These findings allow us to conclude that quantitative numeric knowledge is functionally independent of lexical or non-quantitative numeric knowledge. From this functional autonomy, a possible structural independence can be inferred.

  1. Basic and advanced numerical performances relate to mathematical expertise but are fully mediated by visuospatial skills.

    PubMed

    Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi

    2016-09-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate compared with nonmathematicians when mapping positive, but not negative numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between the performance in the number line task and the group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Professional mathematicians differ from controls in their spatial-numerical associations.

    PubMed

    Cipora, Krzysztof; Hohol, Mateusz; Nuerk, Hans-Christoph; Willmes, Klaus; Brożek, Bartosz; Kucharzyk, Bartłomiej; Nęcka, Edward

    2016-07-01

    While mathematically impaired individuals have been shown to have deficits in all kinds of basic numerical representations, among them spatial-numerical associations, little is known about individuals with exceptionally high math expertise. They might have a more abstract magnitude representation or more flexible spatial associations, so that no automatic left/small and right/large spatial-numerical association is elicited. To pursue this question, we examined the Spatial Numerical Association of Response Codes (SNARC) effect in professional mathematicians, who were compared to two control groups: professionals who use advanced math in their work but are not mathematicians (mostly engineers), and matched controls. In contrast to both control groups, mathematicians did not reveal a SNARC effect. The group differences could not be accounted for by differences in mean response speed, response variance, or intelligence, or by a general tendency not to show spatial-numerical associations. We propose that professional mathematicians possess more abstract and/or spatially very flexible numerical representations and therefore do not exhibit, or have a largely reduced, default left-to-right spatial-numerical orientation as indexed by the SNARC effect, but we also discuss other possible accounts. We argue that this comparison with professional mathematicians also tells us about the nature of spatial-numerical associations in persons with much less mathematical expertise or knowledge.

  3. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPU's, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
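
    The core idea described above, splitting a parallelizable job into independent tasks that are farmed out to many cores, can be illustrated with a small single-machine sketch. The example below uses Python's multiprocessing to distribute a toy waveform-correlation workload across worker processes; it is only an analogy for the task-farming pattern, not NRM or JPPF themselves, and the function and parameter names are invented for the demonstration.

    ```python
    from multiprocessing import Pool
    import numpy as np

    def correlate_segment(args):
        """One independent task: normalized correlation of a segment with a template."""
        segment, template = args
        seg = (segment - segment.mean()) / (segment.std() + 1e-12)
        tem = (template - template.mean()) / (template.std() + 1e-12)
        return float(np.dot(seg, tem) / len(tem))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        template = rng.standard_normal(1024)
        segments = [rng.standard_normal(1024) for _ in range(1000)]
        segments[42] = template + 0.1 * rng.standard_normal(1024)   # hide one strong match

        # The "cluster": a pool of worker processes, each pulling independent tasks.
        with Pool(processes=4) as pool:
            scores = pool.map(correlate_segment, [(s, template) for s in segments])

        best = int(np.argmax(scores))
        print("best-matching segment:", best, "score:", round(scores[best], 3))
    ```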

  4. On multigrid solution of the implicit equations of hydrodynamics. Experiments for the compressible Euler equations in general coordinates

    NASA Astrophysics Data System (ADS)

    Kifonidis, K.; Müller, E.

    2012-08-01

    Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations, are presented to evaluate the convergence behavior, and the stability properties of this solver. Algorithmic areas are determined where further work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers. These are applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage-smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes). Beyond a Courant number of nine thousand, even complete multigrid breakdown is observed. Local Fourier analysis indicates that the degradation of the convergence rate is associated with the coarse-grid correction algorithm. An implicit scheme for the Euler equations that makes use of the present method was, nevertheless, able to outperform a standard explicit scheme on a time-dependent problem with a Courant number of order 1000. Conclusions: For steady-state problems, the described approach enables the construction of parallelizable, efficient, and robust implicit hydrodynamics solvers. The applicability of the method to time-dependent problems is presently restricted to cases with moderately high Courant numbers. This is due to an insufficient coarse-grid correction of the employed multigrid algorithm for large time steps. Further research will be required to help us to understand and overcome the observed multigrid convergence difficulties for time-dependent problems.
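
    The smoother plus coarse-grid-correction structure at the heart of any multigrid cycle can be sketched on a much simpler problem than the one treated above. The following Python toy applies a two-grid cycle (weighted Jacobi smoothing, full-weighting restriction, a direct coarse solve, and linear prolongation) to the 1-D Poisson equation; it is a linear scalar illustration of the mechanism, not the paper's non-linear full-coarsening scheme with multistage-implicit smoothers for the Euler equations.

    ```python
    import numpy as np

    def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
        """Weighted Jacobi smoother for -u'' = f with homogeneous Dirichlet ends."""
        for _ in range(sweeps):
            u[1:-1] = (1.0 - omega) * u[1:-1] + omega * 0.5 * (u[2:] + u[:-2] + h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[2:] - u[:-2]) / (h * h)
        return r

    def two_grid_cycle(u, f, h):
        u = jacobi(u, f, h, sweeps=3)                         # pre-smoothing
        r = residual(u, f, h)
        nc = (u.size + 1) // 2                                # coarse grid: every other point
        rc = np.zeros(nc)
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]   # full weighting
        hc = 2.0 * h
        A = (2.0 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / hc ** 2
        ec = np.zeros(nc)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])               # coarse-grid error equation (exact here)
        e = np.zeros_like(u)
        e[::2] = ec                                           # prolongation: copy coarse points...
        e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])                # ...and interpolate in between
        u = u + e                                             # coarse-grid correction
        return jacobi(u, f, h, sweeps=3)                      # post-smoothing

    n = 129
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi ** 2 * np.sin(np.pi * x)                        # exact solution: sin(pi x)
    u = np.zeros(n)
    for cycle in range(8):
        u = two_grid_cycle(u, f, h)
        print(f"cycle {cycle}: residual norm = {np.linalg.norm(residual(u, f, h)):.2e}")
    ```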

  5. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

    The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as Grass GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnection and integration, throughout the same flow of processing, of the various well known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one the software platform runs as a standalone application inside a virtual machine. Obviously in this case the computational resources are limited but it will give an overview of the functionalities of the software platform, and also the possibility to define the flow of processing and later on to execute it on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows the installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved by the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan, "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015).

  6. Brain Structural Integrity and Intrinsic Functional Connectivity Forecast 6 Year Longitudinal Growth in Children's Numerical Abilities.

    PubMed

    Evans, Tanya M; Kochalka, John; Ngoon, Tricia J; Wu, Sarah S; Qin, Shaozheng; Battista, Christian; Menon, Vinod

    2015-08-19

    Early numerical proficiency lays the foundation for acquiring quantitative skills essential in today's technological society. Identification of cognitive and brain markers associated with long-term growth of children's basic numerical computation abilities is therefore of utmost importance. Previous attempts to relate brain structure and function to numerical competency have focused on behavioral measures from a single time point. Thus, little is known about the brain predictors of individual differences in growth trajectories of numerical abilities. Using a longitudinal design, with multimodal imaging and machine-learning algorithms, we investigated whether brain structure and intrinsic connectivity in early childhood are predictive of 6 year outcomes in numerical abilities spanning childhood and adolescence. Gray matter volume at age 8 in distributed brain regions, including the ventrotemporal occipital cortex (VTOC), the posterior parietal cortex, and the prefrontal cortex, predicted longitudinal gains in numerical, but not reading, abilities. Remarkably, intrinsic connectivity analysis revealed that the strength of functional coupling among these regions also predicted gains in numerical abilities, providing novel evidence for a network of brain regions that works in concert to promote numerical skill acquisition. VTOC connectivity with posterior parietal, anterior temporal, and dorsolateral prefrontal cortices emerged as the most extensive network predicting individual gains in numerical abilities. Crucially, behavioral measures of mathematics, IQ, working memory, and reading did not predict children's gains in numerical abilities. Our study identifies, for the first time, functional circuits in the human brain that scaffold the development of numerical skills, and highlights potential biomarkers for identifying children at risk for learning difficulties. Children show substantial individual differences in math abilities and ease of math learning. Early numerical abilities provide the foundation for future academic and professional success in an increasingly technological society. Understanding the early identification of poor math skills has therefore taken on great significance. This work provides important new insights into brain structure and connectivity measures that can predict longitudinal growth of children's math skills over a 6 year period, and may eventually aid in the early identification of children who might benefit from targeted interventions. Copyright © 2015 the authors 0270-6474/15/3511743-08$15.00/0.

  7. Numerical Facilities: A Review of the Literature. Technical Report 1985-3.

    ERIC Educational Resources Information Center

    Tal, Joseph S.

    This review of the relevant literature in the area of numerical facility attempts to clarify the construct of numerical facility and provide guidance for items tapping this ability. The review is presented in five parts. The first section introduces two approaches that can be used to investigate numerical facility, including factor analysis.…

  8. Representations of Numerical and Non-Numerical Magnitude Both Contribute to Mathematical Competence in Children

    ERIC Educational Resources Information Center

    Lourenco, Stella F.; Bonny, Justin W.

    2017-01-01

    A growing body of evidence suggests that non-symbolic representations of number, which humans share with nonhuman animals, are functionally related to uniquely human mathematical thought. Other research suggesting that numerical and non-numerical magnitudes not only share analog format but also form part of a general magnitude system raises…

  9. The microcomputer scientific software series 1: the numerical information manipulation system.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The Numerical Information Manipulation System extends the versatility provided by word processing systems for textual data manipulation to mathematical or statistical data in numeric matrix form. Numeric data, stored and processed in the matrix form, may be manipulated in a wide variety of ways. The system allows operations on single elements, entire rows, or columns...

  10. Children’s Numerical Equivalence Judgments: Crossmapping Effects

    PubMed Central

    Mix, Kelly S.

    2009-01-01

    Preschoolers made numerical comparisons between sets with varying degrees of shared surface similarity. When surface similarity was pitted against numerical equivalence (i.e., crossmapping), children made fewer number matches than when surface similarity was neutral (i.e., all sets contained the same objects). Only children who understood the number words for the target sets performed above chance in the crossmapping condition. These findings are consistent with previous research on children’s non-numerical comparisons (e.g., Rattermann & Gentner, 1998; Smith, 1993) and suggest that the same mechanisms may underlie numerical development. PMID:19655027

  11. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Fang, E-mail: fliu@lsec.cc.ac.cn; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
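
    The integration idea summarized above, approximating the numerator by piecewise polynomials and integrating each piece against the pole analytically, can be illustrated on a scalar principal-value integral. The sketch below compares a naive composite trapezoidal rule with a piecewise-linear, analytically integrated version for PV integral of exp(x)/(x - x0) over [-1, 1]; it is a toy analogue of the scheme, not the GW self-energy convolution, and the integrand and grid are assumptions made for the demonstration.

    ```python
    import numpy as np
    from scipy.special import expi

    f = np.exp
    x0 = 1.0 / 3.0                                # pole location (chosen not to be a grid node)
    x = np.linspace(-1.0, 1.0, 201)
    fx = f(x)

    # Naive composite trapezoidal rule applied directly to f(x)/(x - x0).
    g = fx / (x - x0)
    trap = np.sum(0.5 * (g[:-1] + g[1:]) * (x[1:] - x[:-1]))

    # Piecewise-linear numerator, with the principal-value integral of each
    # segment against 1/(x - x0) evaluated analytically:
    #   PV int_a^b [f_lin(x0) + s (x - x0)] / (x - x0) dx
    #     = f_lin(x0) * ln|b - x0| / |a - x0| + s * (b - a)
    a, b = x[:-1], x[1:]
    fa, fb = fx[:-1], fx[1:]
    slope = (fb - fa) / (b - a)
    f_at_pole = fa + slope * (x0 - a)             # linear interpolant evaluated at the pole
    pw = np.sum(f_at_pole * np.log(np.abs((b - x0) / (a - x0))) + slope * (b - a))

    exact = np.exp(x0) * (expi(1.0 - x0) - expi(-1.0 - x0))
    print(f"trapezoid: {trap:.6f}  piecewise-analytic: {pw:.6f}  exact: {exact:.6f}")
    ```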

  12. Analytical and numerical solution for wave reflection from a porous wave absorber

    NASA Astrophysics Data System (ADS)

    Magdalena, Ikha; Roque, Marian P.

    2018-03-01

    In this paper, wave reflection from a porous wave absorber is investigated theoretically and numerically. The equations that we used are based on shallow water type model. Modification of motion inside the absorber is by including linearized friction term in momentum equation and introducing a filtered velocity. Here, an analytical solution for wave reflection coefficient from a porous wave absorber over a flat bottom is derived. Numerically, we solve the equations using the finite volume method on a staggered grid. To validate our numerical model, comparison of the numerical reflection coefficient is made against the analytical solution. Further, we implement our numerical scheme to study the evolution of surface waves pass through a porous absorber over varied bottom topography.
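
    A minimal staggered-grid sketch in the spirit of the model described above is given below: 1-D linear shallow-water equations over a flat bottom, with a linear friction term switched on only inside an absorber region near the right wall. Surface elevation lives at cell centres and velocity at cell faces, as in a staggered finite volume scheme, but the depth, friction coefficient, and domain are illustrative assumptions rather than the paper's values.

    ```python
    import numpy as np

    g, d = 9.81, 1.0                    # gravity, still-water depth
    nx, L = 400, 20.0
    dx = L / nx
    dt = 0.5 * dx / np.sqrt(g * d)      # CFL-limited time step
    cf = 2.0                            # linear friction coefficient inside the absorber

    x_eta = (np.arange(nx) + 0.5) * dx          # surface elevation at cell centres
    x_u = np.arange(nx + 1) * dx                # velocity at cell faces (staggered)
    friction = np.where(x_u > 15.0, cf, 0.0)    # absorber occupies the last 5 m

    eta = 0.05 * np.exp(-((x_eta - 5.0) / 0.5) ** 2)   # initial Gaussian hump
    u = np.zeros(nx + 1)

    for _ in range(1200):
        # momentum update, semi-implicit in the friction term for stability
        dedx = (eta[1:] - eta[:-1]) / dx
        u[1:-1] = (u[1:-1] - g * dt * dedx) / (1.0 + dt * friction[1:-1])
        u[0] = u[-1] = 0.0                      # closed walls
        # continuity update on the staggered grid
        eta -= d * dt * (u[1:] - u[:-1]) / dx

    print("max |eta| after interaction with the absorber:", float(np.abs(eta).max()))
    ```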

  13. Numerical study on anaerobic digestion of fruit and vegetable waste: Biogas generation

    NASA Astrophysics Data System (ADS)

    Wardhani, Puteri Kusuma; Watanabe, Masaji

    2016-02-01

    The study provides experimental and numerical results concerning the anaerobic digestion of fruit and vegetable waste. Experiments were carried out using a batch floating-drum digester without mixing or temperature control. The retention time was 30 days. Numerical results based on a Monod-type model including the influence of temperature are introduced. Initial value problems were analyzed numerically, while kinetic parameters were estimated by trial-and-error methods. The numerical results for the first five days seem appropriate in comparison with the experimental outcomes. However, the numerical results show that the model is inappropriate for 30 days of fermentation. This leads to the conclusion that a Monod-type model is not suitable for describing the degradation of the mixture of fruit and vegetable waste and horse dung.
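
    A Monod-type digestion model of the kind used above can be written as a small system of ordinary differential equations for substrate, biomass, and cumulative biogas. The sketch below integrates such a system with SciPy's solve_ivp; the rate constants and yields are illustrative placeholders, not the kinetic parameters fitted in the study, and the temperature dependence is omitted.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters: specific growth rate (1/day), half-saturation (g/L),
    # biomass yield (g X per g S), and biogas yield (L gas per g S consumed).
    mu_max, Ks, Y, k_gas = 0.4, 2.0, 0.1, 0.5

    def monod(t, y):
        S, X, G = y
        rate = mu_max * S / (Ks + S) * X          # Monod growth kinetics
        dS = -rate / Y                            # substrate consumption
        dX = rate                                 # biomass growth
        dG = k_gas * (rate / Y)                   # cumulative biogas production
        return [dS, dX, dG]

    sol = solve_ivp(monod, (0.0, 30.0), [20.0, 0.5, 0.0], dense_output=True)
    days = np.arange(0, 31, 5)
    for t, (S, X, G) in zip(days, sol.sol(days).T):
        print(f"day {t:2d}: substrate {S:6.2f} g/L, biomass {X:5.2f} g/L, biogas {G:6.2f} L")
    ```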

  14. Representations of numerical and non-numerical magnitude both contribute to mathematical competence in children.

    PubMed

    Lourenco, Stella F; Bonny, Justin W

    2017-07-01

    A growing body of evidence suggests that non-symbolic representations of number, which humans share with nonhuman animals, are functionally related to uniquely human mathematical thought. Other research suggesting that numerical and non-numerical magnitudes not only share analog format but also form part of a general magnitude system raises questions about whether the non-symbolic basis of mathematical thinking is unique to numerical magnitude. Here we examined this issue in 5- and 6-year-old children using comparison tasks of non-symbolic number arrays and cumulative area as well as standardized tests of math competence. One set of findings revealed that scores on both magnitude comparison tasks were modulated by ratio, consistent with shared analog format. Moreover, scores on these tasks were moderately correlated, suggesting overlap in the precision of numerical and non-numerical magnitudes, as expected under a general magnitude system. Another set of findings revealed that the precision of both types of magnitude contributed shared and unique variance to the same math measures (e.g. calculation and geometry), after accounting for age and verbal competence. These findings argue against an exclusive role for non-symbolic number in supporting early mathematical understanding. Moreover, they suggest that mathematical understanding may be rooted in a general system of magnitude representation that is not specific to numerical magnitude but that also encompasses non-numerical magnitude. © 2016 John Wiley & Sons Ltd.

  15. Development of numerical processing in children with typical and dyscalculic arithmetic skills—a longitudinal study

    PubMed Central

    Landerl, Karin

    2013-01-01

    Numerical processing has been demonstrated to be closely associated with arithmetic skills, however, our knowledge on the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period from beginning of Grade 2, when children were 7; 6 years old, to beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development in numerical processing while within-task effects remained largely constant and showed low long-term stability before middle of Grade 3. Children with dyscalculia showed less efficient numerical processing reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an untypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for identification of children who struggle in their numerical development. PMID:23898310

  16. Free Radical Addition Polymerization Kinetics without Steady-State Approximations: A Numerical Analysis for the Polymer, Physical, or Advanced Organic Chemistry Course

    ERIC Educational Resources Information Center

    Iler, H. Darrell; Brown, Amber; Landis, Amanda; Schimke, Greg; Peters, George

    2014-01-01

    A numerical analysis of the free radical addition polymerization system is described that provides those teaching polymer, physical, or advanced organic chemistry courses the opportunity to introduce students to numerical methods in the context of a simple but mathematically stiff chemical kinetic system. Numerical analysis can lead students to an…
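
    The stiff kinetic system referred to above can be integrated directly, without the quasi-steady-state approximation for the radical pool, using an implicit (BDF) solver; see the sketch below. It treats a lumped initiation-propagation-termination scheme; the rate constants are generic textbook orders of magnitude, not values from the article, and lumping all radical chain lengths into one pool is a simplifying assumption.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    kd, f = 1.0e-5, 0.5          # initiator decomposition rate (1/s), initiator efficiency
    kp, kt = 1.0e3, 1.0e7        # propagation and termination rate constants (L/(mol s))

    def rates(t, y):
        I, R, M = y              # initiator, total radicals, monomer (mol/L)
        dI = -kd * I
        dR = 2.0 * f * kd * I - 2.0 * kt * R ** 2   # radical balance, no steady-state assumption
        dM = -kp * M * R
        return [dI, dR, dM]

    y0 = [0.01, 0.0, 1.0]
    sol = solve_ivp(rates, (0.0, 3600.0), y0, method="BDF", rtol=1e-8, atol=1e-12)
    I, R, M = sol.y[:, -1]
    print(f"after 1 h: radicals {R:.3e} mol/L, monomer conversion {1.0 - M:.1%}")
    # For comparison, the quasi-steady-state estimate of the radical concentration:
    print("QSSA radical estimate:", np.sqrt(f * kd * I / kt))
    ```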

  17. Code Validation Studies of High-Enthalpy Flows

    DTIC Science & Technology

    2006-12-01

    stage of future hypersonic vehicles. The development and design of such vehicles is aided by the use of experimentation and numerical simulation... numerical predictions and experimental measurements. 3. Summary of Previous Work We have studied extensively hypersonic double-cone flows with and in...the experimental measurements and the numerical predictions. When we accounted for that effect in numerical simulations, and also augmented the

  18. Summary of research in applied mathematics, numerical analysis, and computer sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  19. The Numerical Range of the Luoshu Is a Piece of Cake--Almost

    ERIC Educational Resources Information Center

    Trenkler, Gotz; Trenkler, Dietrich

    2012-01-01

    The numerical range, easy to understand but often tedious to compute, provides useful information about a matrix. Here we describe the numerical range of a 3 x 3 magic square. Applying our results to one of the most famous of those squares, the Luoshu, it turns out that its numerical range is a piece of cake--almost.
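
    The numerical range W(A) = {x*Ax : ||x|| = 1} of any matrix can be traced numerically with the standard rotation trick: for each angle theta, the extreme eigenvector of the Hermitian part of exp(i*theta)*A gives a boundary point. The sketch below applies this to the Luoshu magic square; it is a generic boundary-sampling computation, not the article's closed-form description of the shape.

    ```python
    import numpy as np

    A = np.array([[4, 9, 2],
                  [3, 5, 7],
                  [8, 1, 6]], dtype=complex)      # the Luoshu magic square

    points = []
    for theta in np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False):
        H = 0.5 * (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T)
        w, V = np.linalg.eigh(H)                  # eigenvalues in ascending order
        v = V[:, -1]                              # eigenvector of the largest eigenvalue
        points.append(v.conj() @ A @ v)           # corresponding boundary point of W(A)

    points = np.array(points)
    print("rightmost point (should equal the magic constant 15):", points.real.max())
    print("vertical extent of W(A):", points.imag.min(), points.imag.max())
    ```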

  20. The Transition from Informal to Formal Mathematical Knowledge: Mediation by Numeral Knowledge

    ERIC Educational Resources Information Center

    Purpura, David J.; Baroody, Arthur J.; Lonigan, Christopher J.

    2013-01-01

    The purpose of the present study was to determine if numeral knowledge--the ability to identify Arabic numerals and connect Arabic numerals to their respective quantities--mediates the relation between informal and formal mathematical knowledge. A total of 206 3- to 5-year-old preschool children were assessed on 6 informal mathematics tasks and 2…

  1. Effects of Finger Counting on Numerical Development – The Opposing Views of Neurocognition and Mathematics Education

    PubMed Central

    Moeller, Korbinian; Martignon, Laura; Wessolowski, Silvia; Engel, Joachim; Nuerk, Hans-Christoph

    2011-01-01

    Children typically learn basic numerical and arithmetic principles using finger-based representations. However, whether or not reliance on finger-based representations is beneficial or detrimental is the subject of an ongoing debate between researchers in neurocognition and mathematics education. From the neurocognitive perspective, finger counting provides multisensory input, which conveys both cardinal and ordinal aspects of numbers. Recent data indicate that children with good finger-based numerical representations show better arithmetic skills and that training finger gnosis, or “finger sense,” enhances mathematical skills. Therefore neurocognitive researchers conclude that elaborate finger-based numerical representations are beneficial for later numerical development. However, research in mathematics education recommends fostering mentally based numerical representations so as to induce children to abandon finger counting. More precisely, mathematics education recommends first using finger counting, then concrete structured representations and, finally, mental representations of numbers to perform numerical operations. Taken together, these results reveal an important debate between neurocognitive and mathematics education research concerning the benefits and detriments of finger-based strategies for numerical development. In the present review, the rationale of both lines of evidence will be discussed. PMID:22144969

  2. Differences in arithmetic performance between Chinese and German adults are accompanied by differences in processing of non-symbolic numerical magnitude

    PubMed Central

    Lonnemann, Jan; Li, Su; Zhao, Pei; Li, Peng; Linkersdörfer, Janosch; Lindberg, Sven; Hasselhorn, Marcus; Yan, Song

    2017-01-01

    Human beings are assumed to possess an approximate number system (ANS) dedicated to extracting and representing approximate numerical magnitude information. The ANS is assumed to be fundamental to arithmetic learning and has been shown to be associated with arithmetic performance. It is, however, still a matter of debate whether better arithmetic skills are reflected in the ANS. To address this issue, Chinese and German adults were compared regarding their performance in simple arithmetic tasks and in a non-symbolic numerical magnitude comparison task. Chinese participants showed a better performance in solving simple arithmetic tasks and faster reaction times in the non-symbolic numerical magnitude comparison task without making more errors than their German peers. These differences in performance could not be ascribed to differences in general cognitive abilities. Better arithmetic skills were thus found to be accompanied by a higher speed of retrieving non-symbolic numerical magnitude knowledge but not by a higher precision of non-symbolic numerical magnitude representations. The group difference in the speed of retrieving non-symbolic numerical magnitude knowledge was fully mediated by the performance in arithmetic tasks, suggesting that arithmetic skills shape non-symbolic numerical magnitude processing skills. PMID:28384191

  3. Patterns of linguistic and numerical performance in aphasia.

    PubMed

    Rath, Dajana; Domahs, Frank; Dressel, Katharina; Claros-Salinas, Dolores; Klein, Elise; Willmes, Klaus; Krinzinger, Helga

    2015-02-04

    Empirical research on the relationship between linguistic and numerical processing revealed inconsistent results for different levels of cognitive processing (e.g., lexical, semantic) as well as different stimulus materials (e.g., Arabic digits, number words, letters, non-number words). Information of dissociation patterns in aphasic patients was used in order to investigate the dissociability of linguistic and numerical processes. The aim of the present prospective study was a comprehensive, specific, and systematic investigation of relationships between linguistic and numerical processing, considering the impact of asemantic vs. semantic processing and the type of material employed (numbers compared to letters vs. words). A sample of aphasic patients (n = 60) was assessed with a battery of linguistic and numerical tasks directly comparable for their cognitive processing levels (e.g., perceptual, morpho-lexical, semantic). Mean performance differences and frequencies of (complementary) dissociations in individual patients revealed the most prominent numerical advantage for asemantic tasks when comparing the processing of numbers vs. letters, whereas the least numerical advantage was found for semantic tasks when comparing the processing of numbers vs. words. Different patient subgroups showing differential dissociation patterns were further analysed and discussed. A comprehensive model of linguistic and numerical processing should take these findings into account.

  4. Arithmetic mismatch negativity and numerical magnitude processing in number matching.

    PubMed

    Hsu, Yi-Fang; Szücs, Dénes

    2011-08-11

    This study examined the relationship of the arithmetic mismatch negativity (AMN) and the semantic evaluation of numerical magnitude. The first question was whether the AMN was sensitive to the incongruity in numerical information per se, or rather, to the violation of strategic expectations. The second question was whether the numerical distance effect could appear independently of the AMN. Event-related potentials (ERPs) were recorded while participants decided whether two digits were matching or non-matching in terms of physical similarity. The AMN was enhanced in matching trials presented infrequently relative to non-matching trials presented frequently. The numerical distance effect was found over posterior sites during a 92 ms long interval (236-328 ms) but appeared independently of the AMN. It was not the incongruity in numerical information per se, but rather, the violation of strategic expectations that elicited the AMN. The numerical distance effect might only temporally coincide with the AMN and did not form an inherent part of it.

  5. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
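    To make the implicit-explicit splitting mentioned above concrete, the following minimal Python sketch advances a 1D reaction-diffusion equation with an IMEX Euler step: the stiff diffusion term is treated implicitly and the reaction term explicitly. It is a plain, non-fitted scheme with illustrative parameter choices; the exponentially fitted coefficients and the parameter-estimation strategy of the paper are not reproduced.

```python
import numpy as np

# Minimal IMEX (implicit diffusion / explicit reaction) Euler step for
# u_t = d*u_xx + r(u) on [0, 1] with homogeneous Dirichlet boundaries.
# This is a plain, non-fitted scheme shown only to illustrate the splitting;
# the exponentially fitted coefficients of the paper are not reproduced here.

def imex_step(u, dt, dx, d, reaction):
    n = u.size
    rhs = u + dt * reaction(u)                       # explicit reaction part
    # Implicit diffusion: (I - dt*d*L) u_new = rhs, with L the tridiagonal Laplacian
    main = (1 + 2 * dt * d / dx**2) * np.ones(n)
    off = (-dt * d / dx**2) * np.ones(n - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, rhs)

# Example with an illustrative cubic reaction term.
x = np.linspace(0, 1, 101)[1:-1]                     # interior nodes
u = np.sin(np.pi * x)                                # initial condition
for _ in range(100):
    u = imex_step(u, dt=1e-4, dx=x[1] - x[0], d=0.1,
                  reaction=lambda v: v * (1 - v**2))
```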

  6. Numerical investigation of flow on NACA4412 aerofoil with different aspect ratios

    NASA Astrophysics Data System (ADS)

    Demir, Hacımurat; Özden, Mustafa; Genç, Mustafa Serdar; Çağdaş, Mücahit

    2016-03-01

    In this study, the flow over a NACA4412 aerofoil was investigated both numerically and experimentally at different Reynolds numbers. The experiments were carried out in a low-speed wind tunnel at various angles of attack and two Reynolds numbers (25,000 and 50,000). The aerofoil was manufactured using a 3D printer with two aspect ratios (AR = 1 and AR = 3). Smoke-wire and oil-flow visualization methods were used to visualize the surface flow patterns. The NACA4412 aerofoil was designed using SOLIDWORKS. The structured grid of the numerical model was constructed with the ANSYS ICEM CFD meshing software, and ANSYS FLUENT™ was used to perform the numerical calculations. The numerical results were compared with the experimental results. Bubble formation was shown in the CFD streamlines and the smoke-wire experiments at z/c = 0.4. Furthermore, the bubble shrank at z/c = 0.2 because of the effects of tip vortices in both the numerical and experimental studies. Consequently, good agreement was found between the numerical and experimental results.

  7. Measurement and Prediction of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2008-01-01

    An experimental and numerical investigation into the static and dynamic responses of shape memory alloy hybrid composite (SMAHC) beams is performed to provide quantitative validation of a recently commercialized numerical analysis/design tool for SMAHC structures. The SMAHC beam specimens consist of a composite matrix with embedded pre-strained SMA actuators, which act against the mechanical boundaries of the structure when thermally activated to adaptively stiffen the structure. Numerical results are produced from the numerical model as implemented into the commercial finite element code ABAQUS. A rigorous experimental investigation is undertaken to acquire high fidelity measurements including infrared thermography and projection moire interferometry for full-field temperature and displacement measurements, respectively. High fidelity numerical results are also obtained from the numerical model and include measured parameters, such as geometric imperfection and thermal load. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  8. The laterality effect: myth or truth?

    PubMed

    Cohen Kadosh, Roi

    2008-03-01

    Tzelgov and colleagues [Tzelgov, J., Meyer, J., and Henik, A. (1992). Automatic and intentional processing of numerical information. Journal of Experimental Psychology: Learning, Memory and Cognition, 18, 166-179.] offered the existence of the laterality effect as a post-hoc explanation for their results. According to this effect, numbers are classified automatically as small/large relative to a standard point under autonomous processing of numerical information. However, the genuineness of the laterality effect was never examined, or was confounded with the numerical distance effect. In the current study, I controlled the numerical distance effect and observed that the laterality effect does exist and affects the automatic processing of numerical information. The current results suggest that the laterality effect should be taken into account when using paradigms that require automatic numerical processing, such as Stroop-like or priming tasks.

  9. Numeral size, spacing between targets, and exposure time in discrimination by elderly people using an lcd monitor.

    PubMed

    Huang, Kuo-Chen; Yeh, Po-Chan

    2007-04-01

    The present study investigated the effects of numeral size, spacing between targets, and exposure time on the discrimination performance by elderly and younger people using a liquid crystal display screen. Analysis showed size of numerals significantly affected discrimination, which increased with increasing numeral size. Spacing between targets also had a significant effect on discrimination, i.e., the larger the space between numerals, the better their discrimination. When the spacing between numerals increased to 4 or 5 points, however, discrimination did not increase beyond that for 3-point spacing. Although performance increased with increasing exposure time, the difference in discrimination at an exposure time of 0.8 vs 1.0 sec. was not significant. The accuracy by the elderly group was less than that by younger subjects.

  10. Numerical Modeling in Geodynamics: Success, Failure and Perspective

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.

    2005-12-01

    Real success in numerical modeling of the dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. Success in numerical modeling rests on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes the basic physical laws by a set of mathematical equations, and only then move to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true from erroneous solutions to the geodynamic problem, especially when your problem is complex. (iii) Test your model against analytical and asymptotic solutions and simple 2D and 3D model examples. Develop benchmark analyses of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect and there are small bugs in every computer code; therefore, testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness, and stability of the solution to the mathematical and discrete problems. Otherwise you may solve an improperly posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as few tuning model variables as possible. Two tuning variables already give enough freedom to constrain your model reasonably well with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables is greater than two, carefully test the effect of each variable on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never make high accuracy an end in itself: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should a numerical model be? A model which images every detail of reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real dynamics of the Earth, but we should try to model the dynamics in such a way as to simulate the basic geophysical processes and phenomena. Does a particular model have predictive power? Every numerical model has predictive power; otherwise the model is useless. The predictability of a model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in the belief that they describe the dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. Inverse modeling allows geodynamic models to be tested forward in time using initial conditions restored from present-day observations instead of unknown initial conditions.

  11. Implementing a GPU-based numerical algorithm for modelling dynamics of a high-speed train

    NASA Astrophysics Data System (ADS)

    Sytov, E. S.; Bratus, A. S.; Yurchenko, D.

    2018-04-01

    This paper discusses the initiative of implementing a GPU-based numerical algorithm for studying various phenomena associated with the dynamics of high-speed railway transport. The proposed numerical algorithm for calculating the critical speed of the bogie is based on the first Lyapunov number. The numerical algorithm is validated against analytical results derived for a simple model. A dynamic model of a carriage connected to a new dual-wheelset flexible bogie is studied for linear and dry-friction damping. Numerical results obtained by CPU, MPU and GPU approaches are compared, and the appropriateness of these methods is discussed.

  12. Numerical Simulation of Selecting Model Scale of Cable in Wind Tunnel Test

    NASA Astrophysics Data System (ADS)

    Huang, Yifeng; Yang, Jixin

    The numerical simulation method based on computational fluid dynamics (CFD) provides a possible alternative to physical wind tunnel tests. First, the correctness of the numerical simulation method is validated using a reference example. In order to select the minimum cable length for a given diameter in numerical wind tunnel tests, CFD-based numerical wind tunnel tests are carried out on cables with several different length-to-diameter ratios (L/D). The results show that when L/D reaches 18, the drag coefficient is essentially stable.

  13. Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1992-01-01

    Recent developments using compact difference schemes to solve the Navier-Stokes equations show spectral-like accuracy. A study was made of the numerical characteristics of various combinations of Runge-Kutta (RK) methods and compact difference schemes for calculating the unsteady Euler equations. The accuracy of finite difference schemes is assessed based on evaluations of the dissipative error. The objectives are to reduce the numerical damping while preserving numerical stability. While this approach has had tremendous success for steady flows, the numerical characteristics of unsteady calculations remain largely unclear. For unsteady flows, in addition to the dissipative errors, the phase velocity and harmonic content of the numerical results are of concern. As a result of the discretization procedure, the simulated unsteady flow motions actually propagate in a dispersive numerical medium. Consequently, the dispersion characteristics of the numerical schemes, which relate the phase velocity and the wave number, may greatly impact the numerical accuracy. The aim is to assess the numerical accuracy of the simulated results; to this end, Fourier analysis is used to provide the dispersion characteristics of the various numerical schemes. First, a detailed investigation of existing RK methods is carried out. A generalized form of an N-step RK method is derived. With this generalized form, criteria are derived for three- and four-step RK methods to be third- and fourth-order time accurate for nonlinear equations, e.g., the flow equations. These criteria are then applied to commonly used RK methods such as Jameson's 3-step and 4-step schemes and Wray's algorithm to identify the accuracy of the methods. For the spatial discretization, compact difference schemes are presented. The schemes are formulated in operator form to render them suitable for the Fourier analyses. The performance of the numerical methods is shown by numerical examples, which are described in detail. The third case is a two-dimensional simulation of a Lamb vortex in a uniform flow. This calculation provides a realistic assessment of various finite difference schemes in terms of the conservation of the vortex strength and the harmonic content after travelling a substantial distance. The numerical implementation of Giles' non-reflective equations, coupled with the characteristic equations as the boundary condition, is discussed in detail. Finally, the single-vortex calculation is extended to simulate vortex pairing. When the distance between the two vortices is less than a threshold value, the numerical results show crisp resolution of the vortex merging.
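    The dispersion (Fourier) analysis referred to above can be illustrated with the modified wavenumber of a compact scheme. The sketch below assumes the classical fourth-order tridiagonal (Padé) coefficients, not the specific operator-form schemes of the paper, and compares that compact scheme with a second-order central difference: the closer the modified wavenumber tracks the exact line, the smaller the dispersive (phase) error of the simulated waves.

```python
import numpy as np

# Fourier (dispersion) analysis: modified wavenumber of the classical 4th-order
# tridiagonal compact (Pade) first-derivative scheme versus the 2nd-order
# central difference. alpha = 1/4, a = 3/2 are the standard Pade coefficients,
# used here as an illustrative assumption.

kh = np.linspace(1e-6, np.pi, 200)               # scaled wavenumber k*h
central = np.sin(kh)                             # explicit 2nd-order central
alpha, a = 0.25, 1.5
compact = a * np.sin(kh) / (1 + 2 * alpha * np.cos(kh))

# Exact differentiation corresponds to modified wavenumber k'h = kh.
for khi, c2, c4 in zip(kh[::40], central[::40], compact[::40]):
    print(f"kh={khi:5.2f}  central={c2:5.3f}  compact={c4:5.3f}")
```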

  14. Analytical-numerical solution of a nonlinear integrodifferential equation in econometrics

    NASA Astrophysics Data System (ADS)

    Kakhktsyan, V. M.; Khachatryan, A. Kh.

    2013-07-01

    A mixed problem for a nonlinear integrodifferential equation arising in econometrics is considered. An analytical-numerical method is proposed for solving the problem. Some numerical results are presented.

  15. Numerical Hydrodynamic Study of Hypothetical Levee Setback Scenarios

    DTIC Science & Technology

    2018-01-01

    ERDC/CHL TR-18-1, Flood and Coastal Systems Research and Development Program, January 2018: Numerical Hydrodynamic Study of Hypothetical Levee Setback Scenarios. Abstract (truncated): A numerical hydrodynamic study was conducted to compare multiple levee setback alternatives to the base

  16. Numerical Analysis of Constrained Dynamical Systems, with Applications to Dynamic Contact of Solids, Nonlinear Elastodynamics and Fluid-Structure Interactions

    DTIC Science & Technology

    2000-12-01

    Table-of-contents excerpts (page numbers omitted): Numerical Simulations; Impact of a rod on a rigid wall; Impact of two...; dissipative properties of the proposed scheme; Representative Numerical Simulations; Forging of...; Model Problem II: a Simplified Model of Thin Beams.

  17. An Experimental Comparison of Two Methods Of Teaching Numerical Control Manual Programming Concepts; Visual Media Versus Hands-On Equipment.

    ERIC Educational Resources Information Center

    Biekert, Russell

    Accompanying the rapid changes in technology has been a greater dependence on automation and numerical control, which has resulted in the need to find ways of preparing programmers for industrial machines using numerical control. To compare the hands-on equipment method and a visual media method of teaching numerical control, an experimental and a…

  18. Non-robust numerical simulations of analogue extension experiments

    NASA Astrophysics Data System (ADS)

    Naliboff, John; Buiter, Susanne

    2016-04-01

    Numerical and analogue models of lithospheric deformation provide significant insight into the tectonic processes that lead to specific structural and geophysical observations. As these two types of models contain distinct assumptions and tradeoffs, investigations drawing conclusions from both can reveal robust links between first-order processes and observations. Recent studies have focused on detailed comparisons between numerical and analogue experiments in both compressional and extensional tectonics, sometimes involving multiple lithospheric deformation codes and analogue setups. While such comparisons often show good agreement on first-order deformation styles, results frequently diverge on second-order structures, such as shear zone dip angles or spacing, and in certain cases even on first-order structures. Here, we present finite-element experiments that are designed to directly reproduce analogue "sandbox" extension experiments at the cm-scale. We use material properties and boundary conditions that are directly taken from analogue experiments and use a Drucker-Prager failure model to simulate shear zone formation in sand. We find that our numerical experiments are highly sensitive to numerous numerical parameters. For example, changes to the numerical resolution, velocity convergence parameters and elemental viscosity averaging commonly produce significant changes in first- and second-order structures accommodating deformation. The sensitivity of the numerical simulations to small parameter changes likely reflects a number of factors, including, but not limited to, high angles of internal friction assigned to sand, complex, unknown interactions between the brittle sand (used as an upper crust equivalent) and viscous silicone (lower crust), highly non-linear strain weakening processes and poor constraints on the cohesion of sand. Our numerical-analogue comparison is hampered by (a) an incomplete knowledge of the fine details of sand failure and sand properties, and (b) likely limitations to the use of a continuum Drucker-Prager model for representing shear zone formation in sand. In some cases our numerical experiments provide reasonable fits to first-order structures observed in the analogue experiments, but the numerical sensitivity to small parameter variations leads us to conclude that the numerical experiments are not robust.

  19. Residents' numeric inputting error in computerized physician order entry prescription.

    PubMed

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    A computerized physician order entry (CPOE) system with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Factors such as the numeric inputting methods in human-computer interaction (HCI) produce different error rates and types but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, and to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row on the main keyboard vs. numeric keypad) and urgency level (urgent vs. non-urgent situation). Multiple aspects of the participants' prescribing behavior were also measured in sober (baseline) prescribing situations. The results revealed that in urgent situations, participants were prone to making mistakes when using the numeric row on the main keyboard. After controlling for performance in the sober prescribing situation, the effect of input method disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were of the omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. For the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, posing a considerable risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. It is recommended that the numeric keypad be used for input, as it had lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence, and of the decimal key. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Numbers matter to informed patient choices: a randomized design across age and numeracy levels.

    PubMed

    Peters, Ellen; Hart, P Sol; Tusler, Martin; Fraenkel, Liana

    2014-05-01

    How drug adverse events (AEs) are communicated in the United States may mislead consumers and result in low adherence. Requiring written information to include numeric AE-likelihood information might lessen these effects, but providing numbers may disadvantage less skilled populations. The objective was to determine risk comprehension and willingness to use a medication when presented with numeric or nonnumeric AE-likelihood information across age, numeracy, and cholesterol-lowering drug-use groups. In a cross-sectional Internet survey (N = 905; American Life Panel, 15 May 2008 to 18 June 2008), respondents were presented with a hypothetical prescription medication for high cholesterol. AE likelihoods were described using 1 of 6 formats (nonnumeric: consumer medication information (CMI)-like list, risk labels; numeric: percentage, frequency, risk labels + percentage, risk labels + frequency). Main outcome measures were risk comprehension (recoded to indicate presence/absence of risk overestimation and underestimation), willingness to use the medication (7-point scale; not likely = 0, very likely = 6), and main reason for willingness (chosen from 8 predefined reasons). Individuals given nonnumeric information were more likely to overestimate risk, were less willing to take the medication, and gave different reasons than those provided numeric information across numeracy and age groups (e.g., among the less numerate, 69% and 18% overestimated risks in nonnumeric and numeric formats, respectively; among the more numerate, these same proportions were 66% and 6%). Less numerate middle-aged and older adults, however, showed less influence of numeric format on willingness to take the medication. It is unclear whether differences are clinically meaningful, although some differences are large. Providing numeric AE-likelihood information (compared with nonnumeric) is likely to increase risk comprehension across numeracy and age levels. Its effects on uptake and adherence of prescribed drugs should be similar across the population, except perhaps in older, less numerate individuals.

  1. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
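    The double-precision round-off accumulation the abstract is concerned with can be seen in a toy setting: summing many small increments naively versus with compensated (Kahan) summation. This generic sketch only illustrates finite-precision error growth; it is not the GRACE processing chain.

```python
import math

# Illustration of double-precision round-off accumulation: naive summation of
# many identical increments versus compensated (Kahan) summation.

def naive_sum(values):
    s = 0.0
    for v in values:
        s += v
    return s

def kahan_sum(values):
    s, c = 0.0, 0.0                  # running sum and compensation term
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y              # lost low-order bits of y
        s = t
    return s

values = [0.1] * 10_000_000          # exact value would be 1_000_000
print(naive_sum(values))             # drifts noticeably from the compensated result
print(kahan_sum(values))
print(math.fsum(values))             # correctly rounded reference
```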

  2. Error-analysis and comparison to analytical models of numerical waveforms produced by the NRAR Collaboration

    NASA Astrophysics Data System (ADS)

    Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef

    2013-01-01

    The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ˜100-200M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing on the binary parameters.
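    The overlaps quoted above are noise-weighted inner products between waveforms, maximised over relative time and phase shifts. The sketch below computes a schematic match with a flat (white) noise spectrum via an FFT correlation; realistic detector power spectral densities and the NRAR analysis code itself are not reproduced.

```python
import numpy as np

# Schematic waveform "match": white-noise inner product maximised over time
# shift (FFT correlation) and over a constant phase (modulus). Illustrative
# only; real analyses weight by the detector noise spectrum.

def match(h1, h2):
    H1, H2 = np.fft.fft(h1), np.fft.fft(h2)
    corr = np.fft.ifft(H1 * np.conj(H2))         # correlation over all time shifts
    overlap = np.max(np.abs(corr))
    norm = np.sqrt(np.sum(np.abs(H1)**2) * np.sum(np.abs(H2)**2)) / len(h1)
    return overlap / norm

t = np.linspace(0, 1, 4096, endpoint=False)
chirp = np.sin(2 * np.pi * (30 * t + 40 * t**2))  # toy chirp-like signal
shifted = np.roll(chirp, 50)                      # same signal, time-shifted
print(match(chirp, shifted))                      # close to 1 by construction
```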

  3. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
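    For orientation, a runup parameterization of the Stockdon et al. (2006) type takes the offshore wave height, peak period and foreshore beach slope and returns the 2% exceedance runup. The coefficients in the sketch below are reproduced from memory for illustration only and should be verified against the original paper before any quantitative use.

```python
import math

# A Stockdon-type 2% exceedance runup estimate from offshore wave height H0 (m),
# peak period T (s) and foreshore beach slope. Coefficients are an assumption
# recalled from the literature, shown purely for illustration.

def runup_2pct(H0, T, beach_slope, g=9.81):
    L0 = g * T**2 / (2 * math.pi)                 # deep-water wavelength
    setup = 0.35 * beach_slope * math.sqrt(H0 * L0)
    swash = math.sqrt(H0 * L0 * (0.563 * beach_slope**2 + 0.004)) / 2
    iribarren = beach_slope / math.sqrt(H0 / L0)
    if iribarren < 0.3:                           # highly dissipative limit
        return 0.043 * math.sqrt(H0 * L0)
    return 1.1 * (setup + swash)

print(runup_2pct(H0=2.0, T=10.0, beach_slope=0.05))
```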

  4. Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever it is possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The minimization of employing very fine grids to overcome the production of spurious numerical solutions and/or instability due to under-resolved grids is also sought. The incremental studies to illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach is forthcoming. The results shown so far are very encouraging.

  5. 7 CFR 51.2927 - Marking and packing requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and packing requirements. The minimum size or numerical count of the apricots in any package shall be plainly labeled, stenciled, or otherwise marked on the package. (a) Numerical count. When the numerical...

  6. 7 CFR 51.2927 - Marking and packing requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and packing requirements. The minimum size or numerical count of the apricots in any package shall be plainly labeled, stenciled, or otherwise marked on the package. (a) Numerical count. When the numerical...

  7. 7 CFR 51.2927 - Marking and packing requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and packing requirements. The minimum size or numerical count of the apricots in any package shall be plainly labeled, stenciled, or otherwise marked on the package. (a) Numerical count. When the numerical...

  8. Documentation for the MODFLOW 6 framework

    USGS Publications Warehouse

    Hughes, Joseph D.; Langevin, Christian D.; Banta, Edward R.

    2017-08-10

    MODFLOW is a popular open-source groundwater flow model distributed by the U.S. Geological Survey. Growing interest in surface water and groundwater interactions, local refinement with nested and unstructured grids, karst groundwater flow, solute transport, and saltwater intrusion has led to the development of numerous MODFLOW versions. Oftentimes, there are incompatibilities among these different MODFLOW versions. This report describes a new MODFLOW framework called MODFLOW 6 that is designed to support multiple models and multiple types of models. The framework is written in Fortran using a modular object-oriented design. The primary framework components include the simulation (or main program), Timing Module, Solutions, Models, Exchanges, and Utilities. The first version of the framework focuses on numerical solutions, numerical models, and numerical exchanges. This focus on numerical models allows multiple numerical models to be tightly coupled at the matrix level.

  9. Numerical Study of Periodic Traveling Wave Solutions for the Predator-Prey Model with Landscape Features

    NASA Astrophysics Data System (ADS)

    Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok

    We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through the homogeneous Dirichlet boundary condition which is imposed at the edge of the obstacle domain. To effectively treat the Dirichlet boundary condition, we employ a robust and accurate numerical technique by using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony has been shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to the complicated real landscape features.

  10. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. The numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
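    The numerical-stability issue described above can be illustrated by comparing two algebraically equivalent covariance measurement updates. The Joseph-form update used in the sketch below is a numerically better-behaved stand-in for illustration; it is not the Bierman-Thornton U-D factorization algorithm itself.

```python
import numpy as np

# Two algebraically equivalent Kalman covariance measurement updates. In finite
# precision the conventional form can lose symmetry/positive-definiteness; the
# Joseph form is much better behaved. (The U-D factorization goes further still
# and is not reproduced here.)

def conventional_update(P, H, R):
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return (np.eye(P.shape[0]) - K @ H) @ P

def joseph_update(P, H, R):
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    I_KH = np.eye(P.shape[0]) - K @ H
    return I_KH @ P @ I_KH.T + K @ R @ K.T        # symmetric PSD by construction

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = A @ A.T                                       # a valid covariance
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[1e-12]])                           # very precise measurement
print(np.linalg.eigvalsh(conventional_update(P, H, R)))
print(np.linalg.eigvalsh(joseph_update(P, H, R)))
```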

  11. Numeracy and the Persuasive Effect of Policy Information and Party Cues

    PubMed Central

    Mérola, Vittorio; Hitt, Matthew P.

    2016-01-01

    Numeric political appeals represent a prevalent but overlooked domain of public opinion research. When can quantitative information change political attitudes, and is this change trumped by partisan effects? We analyze how numeracy—or individual differences in citizens’ ability to process and apply numeric policy information—moderates the effectiveness of numeric political appeals on a moderately salient policy issue. Results show that those low in numeracy exhibit a strong party-cue effect, treating numeric information in a superficial and heuristic fashion. Conversely, those high in numeracy are persuaded by numeric information, even when it is sponsored by the opposing party, overcoming the party-cue effect. Our results make clear that overlooking numeric ability when analyzing quantitative political appeals can mask significant persuasion effects, and we build on recent work advancing the understanding of individual differences in public opinion. PMID:27274578

  12. A review of numerical techniques approaching microstructures of crystalline rocks

    NASA Astrophysics Data System (ADS)

    Zhang, Yahui; Wong, Louis Ngai Yuen

    2018-06-01

    The macro-mechanical behavior of crystalline rocks, including strength, deformability and failure pattern, is dominantly influenced by their grain-scale structures. Numerical techniques are commonly used to assist in understanding the complicated mechanisms from a microscopic perspective. Each numerical method has its respective strengths and limitations. This review paper elucidates how numerical techniques take geometrical aspects of the grains into consideration. Four categories of numerical methods are examined: particle-based methods, block-based methods, grain-based methods, and node-based methods. Focusing on grain-scale characteristics, specific relevant issues, including the increasing complexity of the microstructure, the deformation and breakage of model elements, and the fracturing and fragmentation process, are described in more detail. In this way, the intrinsic capabilities and limitations of different numerical approaches in accounting for the micro-mechanics of crystalline rocks and their observed mechanical behavior are explicitly presented.

  13. A new numerical approach for uniquely solvable exterior Riemann-Hilbert problem on region with corners

    NASA Astrophysics Data System (ADS)

    Zamzamir, Zamzana; Murid, Ali H. M.; Ismail, Munira

    2014-06-01

    The numerical solution of the uniquely solvable exterior Riemann-Hilbert problem on a region with corners has previously been explored at off-corner points by discretizing the related integral equation with the Picard iteration method, without any modification of the left-hand side (LHS) or right-hand side (RHS) of the integral equation. The numerical errors over the iterations converge to the required solution; however, for certain problems this approach gives lower accuracy. Hence, this paper presents a new numerical approach for the problem that treats the generalized Neumann kernel on the LHS and the function on the RHS of the integral equation. Because of the corner points, a Gaussian quadrature rule is employed that avoids the corner points during numerical integration. A numerical example on a test region is presented to demonstrate the effectiveness of this formulation.
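    As a structural analogue of the Picard iteration with Gaussian quadrature described above, the sketch below solves a model second-kind Fredholm integral equation with a Nyström-type Gauss-Legendre discretization and fixed-point sweeps. The kernel and right-hand side are illustrative assumptions; the generalized Neumann kernel of the paper is not reproduced.

```python
import numpy as np

# Picard (fixed-point) iteration for a model second-kind integral equation
#   u(x) = f(x) + lam * \int_0^1 K(x, t) u(t) dt,
# discretised with Gauss-Legendre quadrature (a Nystrom-type discretisation).

n = 32
nodes, weights = np.polynomial.legendre.leggauss(n)
t = 0.5 * (nodes + 1.0)                  # map [-1, 1] -> [0, 1]
w = 0.5 * weights

lam = 0.5
K = np.exp(-np.abs(t[:, None] - t[None, :]))   # smooth model kernel (illustrative)
f = np.sin(np.pi * t)

u = f.copy()
for _ in range(50):                      # Picard sweeps
    u = f + lam * (K * w) @ u

residual = np.max(np.abs(u - (f + lam * (K * w) @ u)))
print(residual)                          # small once the iteration has converged
```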

  14. Numerical simulation of the generation, propagation, and diffraction of nonlinear waves in a rectangular basin: A three-dimensional numerical wave tank

    NASA Astrophysics Data System (ADS)

    Darwiche, Mahmoud Khalil M.

    The research presented herein is a contribution to the understanding of the numerical modeling of fully nonlinear, transient water waves. The first part of the work involves the development of a time-domain model for the numerical generation of fully nonlinear, transient waves by a piston type wavemaker in a three-dimensional, finite, rectangular tank. A time-domain boundary-integral model is developed for simulating the evolving fluid field. A robust nonsingular, adaptive integration technique for the assembly of the boundary-integral coefficient matrix is developed and tested. A parametric finite-difference technique for calculating the fluid- particle kinematics is also developed and tested. A novel compatibility and continuity condition is implemented to minimize the effect of the singularities that are inherent at the intersections of the various Dirichlet and/or Neumann subsurfaces. Results are presented which demonstrate the accuracy and convergence of the numerical model. The second portion of the work is a study of the interaction of the numerically-generated, fully nonlinear, transient waves with a bottom-mounted, surface-piercing, vertical, circular cylinder. The numerical model developed in the first part of this dissertation is extended to include the presence of the cylinder at the centerline of the basin. The diffraction of the numerically generated waves by the cylinder is simulated, and the particle kinematics of the diffracted flow field are calculated and reported. Again, numerical results showing the accuracy and convergence of the extended model are presented.

  15. Developing group investigation-based book on numerical analysis to increase critical thinking student’s ability

    NASA Astrophysics Data System (ADS)

    Maharani, S.; Suprapto, E.

    2018-03-01

    Critical thinking is very important in mathematics; it helps students understand mathematical concepts more deeply. Critical thinking is also needed in numerical analysis, yet existing numerical analysis textbooks do not yet address it. This research aims to develop a group-investigation-based book on numerical analysis to increase students' critical thinking ability, and to establish whether the book is valid, practical, and effective. The research method is Research and Development (R&D), with 30 undergraduate students of the Department of Mathematics Education at Universitas PGRI Madiun as subjects. The development model used is the 4-D model, modified to 3-D up to the development stage. The data are descriptive and qualitative, and the instruments used are validation sheets, tests, and questionnaires. The development results indicate that the group-investigation-based book on numerical analysis is in the valid category, with a score of 84.25%. Students responded very positively to the book, so it falls into the practical category (86.00%). Use of the book met the criterion for classical learning completeness at 84.32%. Based on these results, the study concludes that the group-investigation-based book on numerical analysis is feasible because it meets the criteria of validity, practicality, and effectiveness, and can therefore be used by mathematics academics. Future research could examine group-investigation-based books in other subjects.

  16. Adaptive Grid Generation for Numerical Solution of Partial Differential Equations.

    DTIC Science & Technology

    1983-12-01

    numerical solution of fluid dynamics problems is presented. However, the method is applicable to the numerical evaluation of any partial differential...emphasis is being placed on numerical solution of the governing differential equations by finite difference methods. In the past two decades, considerable...original equations presented in that paper. The solution of the second problem is more difficult. The method of Thompson et al. provides control for

  17. Numerical simulations to the nonlinear model of interpersonal relationships with time fractional derivative

    NASA Astrophysics Data System (ADS)

    Gencoglu, Muharrem Tuncay; Baskonus, Haci Mehmet; Bulut, Hasan

    2017-01-01

    The main aim of this manuscript is to obtain numerical solutions for the nonlinear model of interpersonal relationships with a time-fractional derivative. The variational iteration method is implemented theoretically and applied numerically to yield the desired solutions. Numerical simulations of the solutions are plotted using Wolfram Mathematica 9.

  18. Long-term dynamic modeling of tethered spacecraft using nodal position finite element method and symplectic integration

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhu, Z. H.

    2015-12-01

    Dynamic modeling of tethered spacecraft that accounts for the elasticity of the tether is prone to numerical instability and error accumulation over long-term numerical integration. This paper addresses these challenges by proposing a globally stable numerical approach combining the nodal position finite element method (NPFEM) with implicit, symplectic, 2-stage, 4th-order Gauss-Legendre Runge-Kutta time integration. The NPFEM eliminates numerical error accumulation by using the position, instead of the displacement, of the tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of the numerical solution. The effectiveness and robustness of the proposed approach are assessed on an elastic pendulum problem, whose dynamic response resembles that of a tethered spacecraft, in comparison with commonly used time integrators such as the classical 4th-order Runge-Kutta scheme and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and that the energy of the corresponding numerical model is conserved over long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of the deorbiting process of a tethered spacecraft over a long period.
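    The time integrator named above, the implicit 2-stage, 4th-order Gauss-Legendre Runge-Kutta scheme, can be sketched on the elastic pendulum test problem as follows. The stage equations are solved here by simple fixed-point iteration and the parameter values are illustrative assumptions; this is not the paper's NPFEM tether model.

```python
import numpy as np

# 2-stage, 4th-order Gauss-Legendre implicit Runge-Kutta step applied to an
# elastic (spring) pendulum. Being symplectic, the scheme keeps the energy
# error bounded over long integrations. Parameters are illustrative only.

m, k, l0, g = 1.0, 50.0, 1.0, 9.81

def rhs(s):
    x, y, vx, vy = s
    r = np.hypot(x, y)
    fs = -k * (r - l0) / r                         # spring force per unit radius
    return np.array([vx, vy, fs * x / m, fs * y / m - g])

A = np.array([[0.25, 0.25 - np.sqrt(3) / 6],
              [0.25 + np.sqrt(3) / 6, 0.25]])
b = np.array([0.5, 0.5])

def gl2_step(s, h):
    K = np.tile(rhs(s), (2, 1))                    # initial guess for stage slopes
    for _ in range(20):                            # fixed-point iteration
        K = np.array([rhs(s + h * (A[i] @ K)) for i in range(2)])
    return s + h * (b @ K)

def energy(s):
    x, y, vx, vy = s
    r = np.hypot(x, y)
    return 0.5 * m * (vx**2 + vy**2) + 0.5 * k * (r - l0)**2 + m * g * y

s = np.array([1.2, 0.0, 0.0, 0.0])
e0 = energy(s)
for _ in range(20000):
    s = gl2_step(s, h=1e-3)
print(energy(s) - e0)                              # energy error stays small
```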

  19. Activity in the fronto-parietal network indicates numerical inductive reasoning beyond calculation: An fMRI study combined with a cognitive model

    PubMed Central

    Liang, Peipeng; Jia, Xiuqin; Taatgen, Niels A.; Borst, Jelmer P.; Li, Kuncheng

    2016-01-01

    Numerical inductive reasoning refers to the process of identifying and extrapolating the rule involved in numeric materials. It is associated with calculation, and shares the common activation of the fronto-parietal regions with calculation, which suggests that numerical inductive reasoning may correspond to a general calculation process. However, compared with calculation, rule identification is critical and unique to reasoning. Previous studies have established the central role of the fronto-parietal network for relational integration during rule identification in numerical inductive reasoning. The current question of interest is whether numerical inductive reasoning exclusively corresponds to calculation or operates beyond calculation, and whether it is possible to distinguish between them based on the activity pattern in the fronto-parietal network. To directly address this issue, three types of problems were created: numerical inductive reasoning, calculation, and perceptual judgment. Our results showed that the fronto-parietal network was more active in numerical inductive reasoning which requires more exchanges between intermediate representations and long-term declarative knowledge during rule identification. These results survived even after controlling for the covariates of response time and error rate. A computational cognitive model was developed using the cognitive architecture ACT-R to account for the behavioral results and brain activity in the fronto-parietal network. PMID:27193284

  20. Activity in the fronto-parietal network indicates numerical inductive reasoning beyond calculation: An fMRI study combined with a cognitive model.

    PubMed

    Liang, Peipeng; Jia, Xiuqin; Taatgen, Niels A; Borst, Jelmer P; Li, Kuncheng

    2016-05-19

    Numerical inductive reasoning refers to the process of identifying and extrapolating the rule involved in numeric materials. It is associated with calculation, and shares the common activation of the fronto-parietal regions with calculation, which suggests that numerical inductive reasoning may correspond to a general calculation process. However, compared with calculation, rule identification is critical and unique to reasoning. Previous studies have established the central role of the fronto-parietal network for relational integration during rule identification in numerical inductive reasoning. The current question of interest is whether numerical inductive reasoning exclusively corresponds to calculation or operates beyond calculation, and whether it is possible to distinguish between them based on the activity pattern in the fronto-parietal network. To directly address this issue, three types of problems were created: numerical inductive reasoning, calculation, and perceptual judgment. Our results showed that the fronto-parietal network was more active in numerical inductive reasoning which requires more exchanges between intermediate representations and long-term declarative knowledge during rule identification. These results survived even after controlling for the covariates of response time and error rate. A computational cognitive model was developed using the cognitive architecture ACT-R to account for the behavioral results and brain activity in the fronto-parietal network.

  1. Numerical analysis for the fractional diffusion and fractional Buckmaster equation by the two-step Laplace Adam-Bashforth method

    NASA Astrophysics Data System (ADS)

    Jain, Sonal

    2018-01-01

    In this paper, we use the alternative numerical scheme given by Gnitchogna and Atangana for solving partial differential equations with integer and non-integer differential operators. We apply this method to the fractional diffusion model and the fractional Buckmaster model with non-local fading memory. The method yields a powerful, easily implemented numerical algorithm for fractional-order derivatives. We also present in detail the stability analysis of the numerical method for solving the diffusion equation; the proof shows that the method is very stable and converges quickly to the exact solution. Finally, some numerical simulations are presented.
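    For readers unfamiliar with the underlying predictor, the sketch below shows the classical integer-order two-step Adams-Bashforth scheme on a scalar test equation. It only fixes the two-step structure that the Laplace Adams-Bashforth method generalises; the fractional and Laplace-transform ingredients of the paper are not reproduced.

```python
import numpy as np

# Classical two-step Adams-Bashforth scheme:
#   y_{n+1} = y_n + h * (3/2 * f_n - 1/2 * f_{n-1}),
# bootstrapped with one explicit Euler step.

def ab2(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    f_prev = f(t0, y0)
    ys = [y0, y0 + h * f_prev]          # Euler bootstrap for the first step
    for i in range(1, n):
        f_curr = f(t0 + i * h, ys[-1])
        ys.append(ys[-1] + h * (1.5 * f_curr - 0.5 * f_prev))
        f_prev = f_curr
    return np.array(ys)

# Example: y' = -y with exact solution exp(-t).
sol = ab2(lambda t, y: -y, y0=1.0, t0=0.0, t1=1.0, n=100)
print(sol[-1], np.exp(-1.0))            # numerical vs exact value at t = 1
```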

  2. A study of numerical methods of solution of the equations of motion of a controlled satellite under the influence of gravity gradient torque

    NASA Technical Reports Server (NTRS)

    Thompson, J. F.; Mcwhorter, J. C.; Siddiqi, S. A.; Shanks, S. P.

    1973-01-01

    Numerical methods of integration of the equations of motion of a controlled satellite under the influence of gravity-gradient torque are considered. The results of computer experimentation using a number of Runge-Kutta, multistep, and extrapolation methods for the numerical integration of this differential system are presented, and particularly efficient methods are noted. A large bibliography of numerical methods for initial value problems for ordinary differential equations is presented, and a compilation of Runge-Kutta and multistep formulas is given. Less common numerical integration techniques from the literature are noted for further consideration.
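
    For orientation, the sketch below shows the kind of fixed-step fourth-order Runge-Kutta integrator surveyed in such studies, applied to a simple planar two-body orbit rather than to the full controlled-satellite attitude equations with gravity-gradient torque; the orbit, step size, and gravitational parameter are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: classical fixed-step fourth-order Runge-Kutta (RK4) applied to a
# planar two-body orbit, as a stand-in for the more elaborate satellite dynamics.

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def two_body(t, y, mu=398600.4418):          # km^3/s^2, Earth's gravitational parameter
    r, v = y[:2], y[2:]
    a = -mu * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

# Circular low-Earth orbit: r = 7000 km, v = sqrt(mu / r)
y = np.array([7000.0, 0.0, 0.0, np.sqrt(398600.4418 / 7000.0)])
h = 10.0                                      # step size in seconds
for _ in range(600):                          # roughly one orbital period
    y = rk4_step(two_body, 0.0, y, h)
```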

  3. Parental numeric language input to Mandarin Chinese and English speaking preschool children.

    PubMed

    Chang, Alicia; Sandhofer, Catherine M; Adelchanow, Lauren; Rottman, Benjamin

    2011-03-01

    The present study examined the number-specific parental language input to Mandarin- and English-speaking preschool-aged children. Mandarin and English transcripts from the CHILDES database were examined for the amount of numeric speech, the specific types of numeric speech, and the syntactic frames in which numeric speech appeared. The results showed that Mandarin-speaking parents talked about number more frequently than English-speaking parents. Further, the ways in which parents talked about number terms in the two languages were more supportive of a cardinal interpretation in Mandarin than in English. We discuss these results in terms of their implications for numerical understanding and later mathematical performance.

  4. A discontinuous Galerkin method for poroelastic wave propagation: The two-dimensional case

    NASA Astrophysics Data System (ADS)

    Dudley Ward, N. F.; Lähivaara, T.; Eveson, S.

    2017-12-01

    In this paper, we consider a high-order discontinuous Galerkin (DG) method for modelling wave propagation in coupled poroelastic-elastic media. The upwind numerical flux is derived as an exact solution for the Riemann problem including the poroelastic-elastic interface. Attenuation mechanisms in both Biot's low- and high-frequency regimes are considered. The current implementation supports non-uniform basis orders which can be used to control the numerical accuracy element by element. In the numerical examples, we study the convergence properties of the proposed DG scheme and provide experiments where the numerical accuracy of the scheme under consideration is compared to analytic and other numerical solutions.

  5. Robust recognition of handwritten numerals based on dual cooperative network

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Choi, Yeongwoo

    1992-01-01

    An approach to robust recognition of handwritten numerals using two parallel operating networks is presented. The first network uses inputs in Cartesian coordinates, and the second network uses the same inputs transformed into polar coordinates. We describe how the proposed approach achieves robustness to local and global variations of input numerals by handling inputs both in Cartesian coordinates and in their polar-coordinate transform. The required network structures and their learning scheme are discussed. Experimental results show that, by tracking only a small number of distinctive features for each teaching numeral in each coordinate system, the proposed system can provide robust recognition of handwritten numerals.
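
    A minimal sketch of the coordinate-transformation idea is given below: a numeral bitmap is resampled from Cartesian to polar coordinates so that a second network can operate on the (r, theta) representation alongside the original (x, y) one. The nearest-neighbour sampling, image size, and grid resolution are assumptions; the record does not specify these details.

```python
import numpy as np

# Minimal sketch: resample a numeral bitmap from Cartesian (x, y) to polar (r, theta)
# coordinates. Global rotations of the numeral become shifts along the theta axis.

def to_polar(img, n_r=32, n_theta=32):
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    out = np.zeros((n_r, n_theta), dtype=img.dtype)
    for i in range(n_r):
        for j in range(n_theta):
            r = r_max * (i + 0.5) / n_r
            theta = 2.0 * np.pi * j / n_theta
            y = int(round(cy + r * np.sin(theta)))   # nearest-neighbour sampling
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out

digit = np.zeros((28, 28))
digit[4:24, 13:15] = 1.0                             # a crude "1" for demonstration
polar_digit = to_polar(digit)                        # second-network input
```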

  6. The Numerical Studies Program for the Atmospheric General Circulation Experiment (AGCE) for Spacelab Flights

    NASA Technical Reports Server (NTRS)

    Fowlis, W. W. (Editor); Davis, M. H. (Editor)

    1981-01-01

    The atmospheric general circulation experiment (AGCE) numerical design for Spacelab flights was studied. A spherical baroclinic flow experiment which models the large-scale circulations of the Earth's atmosphere was proposed. Gravity is simulated by a radial dielectric body force. The major objective of the AGCE is to study nonlinear baroclinic wave flows in spherical geometry. Numerical models must be developed which accurately predict the basic axisymmetric states and the stability of nonlinear baroclinic wave flows. A three-dimensional, fully nonlinear numerical model of the AGCE based on the complete set of equations is required. Progress in the AGCE numerical design studies program is reported.

  7. Numerical solution of potential flow about arbitrary 2-dimensional multiple bodies

    NASA Technical Reports Server (NTRS)

    Thompson, J. F.; Thames, F. C.

    1982-01-01

    A procedure for the finite-difference numerical solution of the lifting potential flow about any number of arbitrarily shaped bodies is given. The solution is based on a technique of automatic numerical generation of a curvilinear coordinate system having coordinate lines coincident with the contours of all bodies in the field, regardless of their shapes and number. The effects of all numerical parameters involved are analyzed and appropriate values are recommended. Comparisons with analytic solutions for single Karman-Trefftz airfoils and a circular cylinder pair show excellent agreement. The technique of application of the boundary-fitted coordinate systems to the numerical solution of partial differential equations is illustrated.
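
    The sketch below illustrates the elliptic grid-generation idea in a deliberately simplified form: interior nodes of a body-fitted O-grid between a circular body and a circular outer boundary are relaxed by Jacobi iteration on Laplace equations in the computational coordinates. This is only a Laplace smoother, not the full transformed elliptic system used in the paper, and the geometry, resolution, and iteration count are assumptions.

```python
import numpy as np

# Minimal sketch: boundary-fitted O-grid by Jacobi relaxation of
# x_xi_xi + x_eta_eta = 0 (and likewise for y), with the inner row pinned to the
# body contour and the outer row pinned to the far-field boundary.

def laplace_grid(x, y, n_iter=2000):
    for _ in range(n_iter):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

n_eta, n_xi = 21, 73
theta = np.linspace(0.0, 2.0 * np.pi, n_xi)
x = np.zeros((n_eta, n_xi)); y = np.zeros((n_eta, n_xi))
x[0], y[0] = np.cos(theta), np.sin(theta)                 # inner boundary: unit-circle body
x[-1], y[-1] = 5.0 * np.cos(theta), 5.0 * np.sin(theta)   # outer boundary: far field
for i in range(n_eta):                                    # initial guess: linear blend
    s = i / (n_eta - 1)
    x[i] = (1 - s) * x[0] + s * x[-1]
    y[i] = (1 - s) * y[0] + s * y[-1]
x, y = laplace_grid(x, y)
```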

  8. Numerical Simulation of Partially-Coherent Broadband Optical Imaging Using the FDTD Method

    PubMed Central

    Çapoğlu, İlker R.; White, Craig A.; Rogers, Jeremy D.; Subramanian, Hariharan; Taflove, Allen; Backman, Vadim

    2012-01-01

    Rigorous numerical modeling of optical systems has attracted interest in diverse research areas ranging from biophotonics to photolithography. We report the full-vector electromagnetic numerical simulation of a broadband optical imaging system with partially-coherent and unpolarized illumination. The scattering of light from the sample is calculated using the finite-difference time-domain (FDTD) numerical method. Geometrical optics principles are applied to the scattered light to obtain the intensity distribution at the image plane. Multilayered object spaces are also supported by our algorithm. For the first time, numerical FDTD calculations are directly compared to and shown to agree well with broadband experimental microscopy results. PMID:21540939

  9. Neural underpinnings of divergent production of rules in numerical analogical reasoning.

    PubMed

    Wu, Xiaofei; Jung, Rex E; Zhang, Hao

    2016-05-01

    Creativity plays an important role in numerical problem solving. Although the neural underpinnings of creativity have been studied over decades, very little is known about neural mechanisms of the creative process that relates to numerical problem solving. In the present study, we employed a numerical analogical reasoning task with functional Magnetic Resonance Imaging (fMRI) to investigate the neural correlates of divergent production of rules in numerical analogical reasoning. Participants performed two tasks: a multiple solution analogical reasoning task and a single solution analogical reasoning task. Results revealed that divergent production of rules involves significant activations at Brodmann area (BA) 10 in the right middle frontal cortex, BA 40 in the left inferior parietal lobule, and BA 8 in the superior frontal cortex. The results suggest that right BA 10 and left BA 40 are involved in the generation of novel rules, and BA 8 is associated with the inhibition of initial rules in numerical analogical reasoning. The findings shed light on the neural mechanisms of creativity in numerical processing. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Variation of student numerical and figural reasoning approaches by pattern generalization type, strategy use and grade level

    NASA Astrophysics Data System (ADS)

    El Mouhayar, Rabih; Jurdak, Murad

    2016-02-01

    This paper explored variation of student numerical and figural reasoning approaches across different pattern generalization types and across grade level. An instrument was designed for this purpose. The instrument was given to a sample of 1232 students from grades 4 to 11 from five schools in Lebanon. Analysis of data showed that the numerical reasoning approach seems to be more dominant than the figural reasoning approach for the near and far pattern generalization types but not for the immediate generalization type. The findings showed that for the recursive strategy, the numerical reasoning approach seems to be more dominant than the figural reasoning approach for each of the three pattern generalization types. However, the figural reasoning approach seems to be more dominant than the numerical reasoning approach for the functional strategy, for each generalization type. The findings also showed that the numerical reasoning was more dominant than the figural reasoning in lower grade levels (grades 4 and 5) for each generalization type. In contrast, the figural reasoning became more dominant than the numerical reasoning in the upper grade levels (grades 10 and 11).

  11. Numerical Approach to Modeling and Characterization of Refractive Index Changes for a Long-Period Fiber Grating Fabricated by Femtosecond Laser

    PubMed Central

    Saad, Akram; Cho, Yonghyun; Ahmed, Farid; Jun, Martin Byung-Guk

    2016-01-01

    A 3D finite element model constructed to predict the intensity-dependent refractive index profile induced by femtosecond laser radiation is presented. A fiber core irradiated by a pulsed laser is modeled as a cylinder subject to predefined boundary conditions using the COMSOL 5.2 Multiphysics commercial package. The numerically obtained refractive index change is used to numerically design and experimentally fabricate a long-period fiber grating (LPFG) in pure-silica-core single-mode fiber under identical laser conditions. To reduce the high computational requirements, the beam envelope method is employed in the aforementioned numerical models. The number of periods, the grating length, and the grating period considered in this work are quantified numerically. The numerically obtained spectral growth of the modeled LPFG appears consistent with the transmission spectrum of the experimentally fabricated LPFG in single-mode fiber. The sensing capabilities of the modeled LPFG are tested by varying the refractive index of the surrounding medium. The numerically obtained spectrum corresponding to the varied refractive index shows good agreement with the experimental findings. PMID:28774060

  12. Numerical Approach to Modeling and Characterization of Refractive Index Changes for a Long-Period Fiber Grating Fabricated by Femtosecond Laser.

    PubMed

    Saad, Akram; Cho, Yonghyun; Ahmed, Farid; Jun, Martin Byung-Guk

    2016-11-21

    A 3D finite element model constructed to predict the intensity-dependent refractive index profile induced by femtosecond laser radiation is presented. A fiber core irradiated by a pulsed laser is modeled as a cylinder subject to predefined boundary conditions using the COMSOL 5.2 Multiphysics commercial package. The numerically obtained refractive index change is used to numerically design and experimentally fabricate a long-period fiber grating (LPFG) in pure-silica-core single-mode fiber under identical laser conditions. To reduce the high computational requirements, the beam envelope method is employed in the aforementioned numerical models. The number of periods, the grating length, and the grating period considered in this work are quantified numerically. The numerically obtained spectral growth of the modeled LPFG appears consistent with the transmission spectrum of the experimentally fabricated LPFG in single-mode fiber. The sensing capabilities of the modeled LPFG are tested by varying the refractive index of the surrounding medium. The numerically obtained spectrum corresponding to the varied refractive index shows good agreement with the experimental findings.

  13. Representation of DNA sequences in genetic codon context with applications in exon and intron prediction.

    PubMed

    Yin, Changchuan

    2015-04-01

    To apply digital signal processing (DSP) methods to the analysis of DNA sequences, the sequences must first be mapped into numerical sequences. Effective numerical mappings of DNA sequences therefore play a key role in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, existing mapping methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC), in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied to exon and intron prediction using the Short-Time Fourier Transform (STFT) approach. The results show that the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein-coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
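
    A minimal sketch of the signal-processing side is given below: each base is assigned the numeric value of the codon it sits in (reading frame 0), and the 3-periodicity is scored as the power-spectrum peak at the N/3 frequency bin relative to the mean power. The codon values in the table are hypothetical placeholders; in the paper they are optimized by simulated annealing, which is not reproduced here.

```python
import numpy as np

# Minimal sketch: codon-based numeric mapping of a DNA string and a period-3
# signal-to-noise ratio (SNR) computed from the discrete Fourier power spectrum.

def codon_values(seq, table):
    """Assign each base the numeric value of the codon (reading frame 0) containing it."""
    vals = np.zeros(len(seq))
    for i in range(0, len(seq) - 2, 3):
        vals[i:i + 3] = table.get(seq[i:i + 3], 0.0)
    return vals

def snr_period3(x):
    """Power at the N/3 frequency bin divided by the average spectral power."""
    spectrum = np.abs(np.fft.fft(x - x.mean())) ** 2
    return spectrum[len(x) // 3] / spectrum[1:].mean()

table = {"ATG": 1.0, "GCC": 0.8, "GAA": 0.6, "TAA": -1.0}   # hypothetical, un-optimized values
seq = ("ATGGCCGAA" * 40) + "TAA"                             # toy coding-like sequence
print(snr_period3(codon_values(seq, table)))
```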

  14. Numerical Hydrodynamics in General Relativity.

    PubMed

    Font, José A

    2003-01-01

    The current status of numerical solutions for the equations of ideal general relativistic hydrodynamics is reviewed. With respect to an earlier version of the article, the present update provides additional information on numerical schemes, and extends the discussion of astrophysical simulations in general relativistic hydrodynamics. Different formulations of the equations are presented, with special mention of conservative and hyperbolic formulations well-adapted to advanced numerical methods. A large sample of available numerical schemes is discussed, paying particular attention to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. A comprehensive summary of astrophysical simulations in strong gravitational fields is presented. These include gravitational collapse, accretion onto black holes, and hydrodynamical evolutions of neutron stars. The material contained in these sections highlights the numerical challenges of various representative simulations. It also follows, to some extent, the chronological development of the field, concerning advances on the formulation of the gravitational field and hydrodynamic equations and the numerical methodology designed to solve them. Supplementary material is available for this article at 10.12942/lrr-2003-4.

  15. Numerical heating in Particle-In-Cell simulations with Monte Carlo binary collisions

    NASA Astrophysics Data System (ADS)

    Alves, E. Paulo; Mori, Warren; Fiuza, Frederico

    2017-10-01

    The binary Monte Carlo collision (BMCC) algorithm is a robust and popular method to include Coulomb collision effects in Particle-in-Cell (PIC) simulations of plasmas. While a number of works have focused on extending the validity of the model to different physical regimes of temperature and density, little attention has been given to the fundamental coupling between PIC and BMCC algorithms. Here, we show that the coupling between PIC and BMCC algorithms can give rise to (nonphysical) numerical heating of the system, that can be far greater than that observed when these algorithms operate independently. This deleterious numerical heating effect can significantly impact the evolution of the simulated system particularly for long simulation times. In this work, we describe the source of this numerical heating, and derive scaling laws for the numerical heating rates based on the numerical parameters of PIC-BMCC simulations. We compare our theoretical scalings with PIC-BMCC numerical experiments, and discuss strategies to minimize this parasitic effect. This work is supported by DOE FES under FWP 100237 and 100182.

  16. Experimental and numerical simulation of a rotor/stator interaction event localized on a single blade within an industrial high-pressure compressor

    NASA Astrophysics Data System (ADS)

    Batailly, Alain; Agrapart, Quentin; Millecamps, Antoine; Brunel, Jean-François

    2016-08-01

    This contribution addresses a confrontation between the experimental simulation of a rotor/stator interaction case initiated by structural contacts and numerical predictions made with an in-house numerical strategy. Contrary to previous studies carried out within the low-pressure compressor of an aircraft engine, this interaction is found to be non-divergent: high amplitudes of vibration are experimentally observed and numerically predicted over a short period of time. An in-depth analysis of experimental data first allows for a precise characterization of the interaction as a rubbing event involving the first torsional mode of a single blade. Numerical results are in good agreement with experimental observations: the critical angular speed, the wear patterns on the casing, and the blade dynamics are accurately predicted. Throughout the article, the in-house numerical strategy is also compared with another numerical strategy from the literature for the simulation of rubbing events: key differences are underlined with respect to the prediction of non-linear interaction phenomena.

  17. On controlling nonlinear dissipation in high order filter methods for ideal and non-ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjogreen, B.

    2004-01-01

    The newly developed adaptive numerical dissipation control in spatially high order filter schemes for the compressible Euler and Navier-Stokes equations has been recently extended to the ideal and non-ideal magnetohydrodynamics (MHD) equations. These filter schemes are applicable to complex unsteady MHD high-speed shock/shear/turbulence problems. They also provide a natural and efficient way for the minimization of Div(B) numerical error. The adaptive numerical dissipation mechanism consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. The numerical dissipation considered consists of high order linear dissipation for the suppression of high frequency oscillation and the nonlinear dissipative portion of high-resolution shock-capturing methods for discontinuity capturing. The applicable nonlinear dissipative portion of high-resolution shock-capturing methods is very general. The objective of this paper is to investigate the performance of three commonly used types of nonlinear numerical dissipation for both the ideal and non-ideal MHD.

  18. A third-order computational method for numerical fluxes to guarantee nonnegative difference coefficients for advection-diffusion equations in a semi-conservative form

    NASA Astrophysics Data System (ADS)

    Sakai, K.; Watabe, D.; Minamidani, T.; Zhang, G. S.

    2012-10-01

    According to the Godunov theorem for numerical calculations of advection equations, no polynomial scheme with constant positive difference coefficients can exceed first-order accuracy. We propose a third-order computational scheme for numerical fluxes that guarantees non-negative difference coefficients in the resulting finite-difference equations for advection-diffusion equations in a semi-conservative form, in which two kinds of numerical fluxes exist at a cell surface and these two fluxes are not always coincident in non-uniform velocity fields. The present scheme is optimized so as to minimize truncation errors of the numerical fluxes while fulfilling the positivity condition on the difference coefficients, which vary with the local Courant number and diffusion number. The distinguishing feature of the present optimized scheme is that it maintains third-order accuracy everywhere without any numerical flux limiter. We extend the present method to multi-dimensional equations. Numerical experiments for advection-diffusion equations yielded non-oscillatory solutions.
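
    To make the non-negative-coefficient (positivity) condition concrete, the sketch below checks it for the simplest explicit scheme, first-order upwind advection plus central diffusion, where the update is u_i^{n+1} = (c + d) u_{i-1}^n + (1 - c - 2d) u_i^n + d u_{i+1}^n with Courant number c = v dt/dx >= 0 and diffusion number d = D dt/dx^2; all coefficients are non-negative iff c + 2d <= 1. The authors' third-order flux construction is not reproduced here.

```python
import numpy as np

# Minimal sketch: positivity check of the difference coefficients for explicit
# first-order upwind advection plus central diffusion, the simplest scheme to
# which the non-negative-coefficient condition applies.

def difference_coefficients(courant, diffusion_number):
    """Return the three coefficients (u_{i-1}, u_i, u_{i+1}) and whether all are >= 0."""
    coeffs = np.array([courant + diffusion_number,
                       1.0 - courant - 2.0 * diffusion_number,
                       diffusion_number])
    return coeffs, bool(np.all(coeffs >= 0.0))

print(difference_coefficients(0.4, 0.2))   # (array([0.6, 0.2, 0.2]), True)  -> monotone
print(difference_coefficients(0.8, 0.2))   # middle coefficient negative    -> False
```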

  19. A study on the behaviour of high-order flux reconstruction method with different low-dissipation numerical fluxes for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Boxi, Lin; Chao, Yan; Shusheng, Chen

    2017-10-01

    This work focuses on the numerical dissipation characteristics of the high-order flux reconstruction (FR) method combined with different numerical fluxes in turbulent flows. The widely used Roe and AUSM+ numerical fluxes, together with their corresponding low-dissipation versions (LMRoe, SLAU2) and higher-resolution variants (HR-LMRoe, HR-SLAU2), are incorporated into the FR framework, and the dissipation interplay of these combinations is investigated in implicit large eddy simulation. The numerical dissipation stemming from these convective numerical fluxes is quantified by simulating the inviscid Gresho vortex, the transitional Taylor-Green vortex, and homogeneous decaying isotropic turbulence. The results suggest that the low-dissipation versions are preferable to their original forms in both high-order and low-order cases, while HR-SLAU2 brings only marginal improvement and HR-LMRoe degrades the solution at high order. At high order, the influence of the numerical flux is reduced, and its built-in viscosity may not be dissipative enough to provide physically consistent turbulence when the flow is under-resolved.
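
    As a small, self-contained illustration of how the convective numerical flux carries the dissipation, the sketch below advances Burgers' equation with a first-order finite-volume scheme and a Rusanov (local Lax-Friedrichs) flux, whose |a|-weighted jump term is the dissipative part; Roe, LMRoe, and SLAU2 differ essentially in how that term is constructed. The flux-reconstruction machinery and turbulence test cases of the paper are not reproduced, and all discretisation parameters are assumptions.

```python
import numpy as np

# Minimal sketch: first-order finite-volume scheme for Burgers' equation u_t + (u^2/2)_x = 0
# on a periodic domain, with a Rusanov (local Lax-Friedrichs) numerical flux whose
# jump term -0.5 * a * (uR - uL) is the explicit numerical dissipation.

def rusanov_flux(uL, uR):
    a = np.maximum(np.abs(uL), np.abs(uR))               # local wave-speed estimate
    return 0.5 * (0.5 * uL**2 + 0.5 * uR**2) - 0.5 * a * (uR - uL)

def step(u, dt, dx):
    uL = u                                               # left state at interface i+1/2
    uR = np.roll(u, -1)                                  # right state (periodic wrap)
    f = rusanov_flux(uL, uR)                             # flux at interface i+1/2
    return u - dt / dx * (f - np.roll(f, 1))             # subtract flux at i-1/2

x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
u = 1.0 + 0.5 * np.sin(x)
dx = x[1] - x[0]
for _ in range(400):
    u = step(u, dt=0.4 * dx / np.max(np.abs(u)), dx=dx)  # CFL-limited step
```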

  20. Surveying the Numeric Databanks.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1987-01-01

    Describes six leading numeric databank services and compares them with bibliographic databases in terms of customers' needs, search software, pricing arrangements, and the role of the search specialist. A listing of the locations of the numeric databanks discussed is provided. (CLB)
