Sample records for dynamic inverse computation

  1. Computational structures for robotic computations

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chang, P. R.

    1987-01-01

    The computational problem of inverse kinematics and inverse dynamics of robot manipulators is discussed, taking advantage of parallelism and pipelined architectures. For the computation of the inverse kinematic position solution, a maximally pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm that overcomes the recurrence problem of the Newton-Euler equations of motion to achieve the time lower bound of O(log₂ n) has also been developed.
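
The O(log₂ n) bound rests on the fact that the Newton-Euler recurrences compose associatively, so they can be evaluated with a parallel prefix (scan). A minimal sketch of that idea, using a scalar affine recurrence v_i = a_i·v_{i-1} + b_i as a stand-in for the full spatial recursion (illustrative only, not the paper's algorithm):

```python
def scan_sequential(pairs, v0):
    # v_i = a_i * v_{i-1} + b_i  -- the serial recurrence, O(n) steps.
    v, out = v0, []
    for a, b in pairs:
        v = a * v + b
        out.append(v)
    return out

def compose(p, q):
    # Composing two affine maps (apply p first, then q) is associative,
    # which is what permits an O(log n)-depth parallel prefix (scan).
    (a1, b1), (a2, b2) = p, q
    return (a2 * a1, a2 * b1 + b2)

def scan_logdepth(pairs, v0):
    # Hillis-Steele inclusive scan: with n processors, each of the
    # ceil(log2 n) rounds runs all its compositions in parallel.
    prefix = list(pairs)
    n, step = len(prefix), 1
    while step < n:
        nxt = prefix[:]
        for i in range(step, n):            # parallel on real hardware
            nxt[i] = compose(prefix[i - step], prefix[i])
        prefix, step = nxt, step * 2
    return [a * v0 + b for a, b in prefix]
```

Both routines produce the same trajectory; the second only needs logarithmic depth when the inner loop runs concurrently.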

  2. What you feel is what you see: inverse dynamics estimation underlies the resistive sensation of a delayed cursor.

    PubMed

    Takamuku, Shinya; Gomi, Hiroaki

    2015-07-22

    How our central nervous system (CNS) learns and exploits relationships between force and motion is a fundamental issue in computational neuroscience. While several lines of evidence have suggested that the CNS predicts motion states and signals from motor commands for control and perception (forward dynamics), it remains controversial whether it also performs the 'inverse' computation, i.e. the estimation of force from motion (inverse dynamics). Here, we show that the resistive sensation we experience while moving a delayed cursor, perceived purely from the change in visual motion, provides evidence of the inverse computation. To clearly specify the computational process underlying the sensation, we systematically varied the visual feedback and examined its effect on the strength of the sensation. In contrast to the prevailing theory that sensory prediction errors modulate our perception, the sensation did not correlate with errors in cursor motion due to the delay. Instead, it correlated with the amount of exposure to the forward acceleration of the cursor. This indicates that the delayed cursor is interpreted as a mechanical load, and the sensation represents its visually implied reaction force. Namely, the CNS automatically computes inverse dynamics, using visually detected motions, to monitor the dynamic forces involved in our actions. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  3. What you feel is what you see: inverse dynamics estimation underlies the resistive sensation of a delayed cursor

    PubMed Central

    Takamuku, Shinya; Gomi, Hiroaki

    2015-01-01

    How our central nervous system (CNS) learns and exploits relationships between force and motion is a fundamental issue in computational neuroscience. While several lines of evidence have suggested that the CNS predicts motion states and signals from motor commands for control and perception (forward dynamics), it remains controversial whether it also performs the ‘inverse’ computation, i.e. the estimation of force from motion (inverse dynamics). Here, we show that the resistive sensation we experience while moving a delayed cursor, perceived purely from the change in visual motion, provides evidence of the inverse computation. To clearly specify the computational process underlying the sensation, we systematically varied the visual feedback and examined its effect on the strength of the sensation. In contrast to the prevailing theory that sensory prediction errors modulate our perception, the sensation did not correlate with errors in cursor motion due to the delay. Instead, it correlated with the amount of exposure to the forward acceleration of the cursor. This indicates that the delayed cursor is interpreted as a mechanical load, and the sensation represents its visually implied reaction force. Namely, the CNS automatically computes inverse dynamics, using visually detected motions, to monitor the dynamic forces involved in our actions. PMID:26156766

  4. Spatial operator factorization and inversion of the manipulator mass matrix

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo; Kreutz-Delgado, Kenneth

    1992-01-01

    This paper advances two linear operator factorizations of the manipulator mass matrix. Embedded in the factorizations are many of the techniques that are regarded as very efficient computational solutions to inverse and forward dynamics problems. The operator factorizations provide a high-level architectural understanding of the mass matrix and its inverse, which is not visible in the detailed algorithms. They also lead to a new approach to the development of computer programs to organize complexity in robot dynamics.

  5. Integration of Gravitational Torques in Cerebellar Pathways Allows for the Dynamic Inverse Computation of Vertical Pointing Movements of a Robot Arm

    PubMed Central

    Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian

    2009-01-01

    Background Several authors suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor predictions of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model) whereas others suggested that it computes sensorimotor predictions (internal forward model). Methodology/Principal Findings This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions without calculating an explicit inverse computation. By using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonist McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movements to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. Conclusions/Significance This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420

  6. X-38 Experimental Control Laws

    NASA Technical Reports Server (NTRS)

    Munday, Steve; Estes, Jay; Bordano, Aldo J.

    2000-01-01

    X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high-fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
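
The inner dynamic inversion rate loop can be sketched with a generic scalar example (assumed toy dynamics, not the MACH or X-38 model): given a rate model ω̇ = f(ω) + g·u, the inversion computes u = (ν − f(ω))/g so that the closed loop sees exactly the commanded acceleration ν from the outer proportional loop.

```python
def track_rate_command(omega_cmd=0.5, dt=0.001, T=3.0, K=8.0):
    # Illustrative plant: omega_dot = f(omega) + g*u (hypothetical numbers).
    f = lambda w: -0.8 * w - 0.3 * w * abs(w)  # onboard model, assumed exact
    g = 2.0
    w = 0.0
    for _ in range(int(T / dt)):
        nu = K * (omega_cmd - w)   # outer proportional loop: desired omega_dot
        u = (nu - f(w)) / g        # dynamic inversion of the rate dynamics
        w += (f(w) + g * u) * dt   # closed loop behaves as w_dot = nu
    return w
```

With a perfect onboard model the rate error decays as a first-order lag with time constant 1/K; model mismatch degrades this, which is why the program stresses robustness testing against aerodynamic uncertainty.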

  7. Typical use of inverse dynamics in perceiving motion in autistic adults: Exploring computational principles of perception and action.

    PubMed

    Takamuku, Shinya; Forbes, Paul A G; Hamilton, Antonia F de C; Gomi, Hiroaki

    2018-05-07

    There is increasing evidence for motor difficulties in many people with autism spectrum condition (ASC). These difficulties could be linked to differences in the use of internal models which represent relations between motions and forces/efforts. The use of these internal models may be dependent on the cerebellum which has been shown to be abnormal in autism. Several studies have examined internal computations of forward dynamics (motion from force information) in autism, but few have tested the inverse dynamics computation, that is, the determination of force-related information from motion information. Here, we examined this ability in autistic adults by measuring two perceptual biases which depend on the inverse computation. First, we asked participants whether they experienced a feeling of resistance when moving a delayed cursor, which corresponds to the inertial force of the cursor implied by its motion-both typical and ASC participants reported similar feelings of resistance. Second, participants completed a psychophysical task in which they judged the velocity of a moving hand with or without a visual cue implying inertial force. Both typical and ASC participants perceived the hand moving with the inertial cue to be slower than the hand without it. In both cases, the magnitude of the effects did not differ between the two groups. Our results suggest that the neural systems engaged in the inverse dynamics computation are preserved in ASC, at least in the observed conditions. Autism Res 2018. © 2018 International Society for Autism Research, Wiley Periodicals, Inc. We tested the ability to estimate force information from motion information, which arises from a specific "inverse dynamics" computation. Autistic adults and a matched control group reported feeling a resistive sensation when moving a delayed cursor and also judged a moving hand to be slower when it was pulling a load. These findings both suggest that the ability to estimate force information from motion information is intact in autism.

  8. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  9. A 3D generic inverse dynamic method using wrench notation and quaternion algebra.

    PubMed

    Dumas, R; Aissaoui, R; de Guise, J A

    2004-06-01

    In the literature, conventional 3D inverse dynamic models are limited in three aspects related to inverse dynamic notation, body segment parameters and kinematic formalism. First, conventional notation yields separate computations of the forces and moments with successive coordinate system transformations. Secondly, the way conventional body segment parameters are defined is based on the assumption that the inertia tensor is principal and the centre of mass is located between the proximal and distal ends. Thirdly, the conventional kinematic formalism uses Euler or Cardanic angles that are sequence-dependent and suffer from singularities. In order to overcome these limitations, this paper presents a new generic method for inverse dynamics. This generic method is based on wrench notation for inverse dynamics, a general definition of body segment parameters and quaternion algebra for the kinematic formalism.
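
The appeal of wrench notation is that forces and moments transform together in one operation instead of the separate successive computations the paper criticizes. A minimal generic illustration: shifting the reference point of a wrench leaves the force unchanged and only augments the moment with a cross-product term.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def shift_wrench(f, m_at_p, p, q):
    # Wrench (f, m) referred to point p, re-referred to point q:
    # moment about q is m_q = m_p + (p - q) x f; the force part is unchanged.
    r = tuple(pi - qi for pi, qi in zip(p, q))
    return f, tuple(mi + ci for mi, ci in zip(m_at_p, cross(r, f)))
```

A unit force along z applied at (1, 0, 0), re-referred to the origin, picks up the expected moment (0, -1, 0).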

  10. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N³) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
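
The local-block idea can be sketched in a few lines: to approximate a selected entry of S⁻¹, invert only a principal submatrix centred on that index (a dense toy sketch; the paper's version is sparse and communication-avoiding).

```python
import numpy as np

def selected_inverse_entry(S, i, halfwidth):
    # Invert the local principal submatrix around index i and read off
    # the centre entry as an approximation to (S^-1)[i, i].
    lo, hi = max(0, i - halfwidth), min(S.shape[0], i + halfwidth + 1)
    return np.linalg.inv(S[lo:hi, lo:hi])[i - lo, i - lo]
```

This works when S is well conditioned, so the entries of S⁻¹ decay away from the diagonal; the block halfwidth then plays the role of the accuracy cutoff the abstract describes.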

  11. A Lie-Theoretic Perspective on O(n) Mass Matrix Inversion for Serial Manipulators and Polypeptide Chains.

    PubMed

    Lee, Kiju; Wang, Yunfeng; Chirikjian, Gregory S

    2007-11-01

    Over the past several decades a number of O(n) methods for forward and inverse dynamics computations have been developed in the multi-body dynamics and robotics literature. A method was developed in 1974 by Fixman for O(n) computation of the mass-matrix determinant for a serial polymer chain consisting of point masses. In other recent papers, we extended this method in order to compute the inverse of the mass matrix for serial chains consisting of point masses. In the present paper, we extend these ideas further and address the case of serial chains composed of rigid-bodies. This requires the use of relatively deep mathematics associated with the rotation group, SO(3), and the special Euclidean group, SE(3), and specifically, it requires that one differentiates functions of Lie-group-valued argument.

  12. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. A minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by a heuristic algorithm with simulated annealing. These optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
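
The flavour of the first heuristic can be conveyed with a level-based list scheduler (a simplified sketch: it ignores interprocessor communication costs and uses greedy earliest-free assignment in place of the paper's weighted bipartite matching).

```python
def list_schedule(deps, cost, p):
    # deps: module -> set of predecessor modules; cost: module -> time.
    # Priority = level (longest cost-weighted path to a sink), so every
    # module is scheduled after all of its predecessors.
    level = {}
    def lv(m):
        if m not in level:
            succs = [x for x in deps if m in deps[x]]
            level[m] = cost[m] + max((lv(s) for s in succs), default=0.0)
        return level[m]
    order = sorted(deps, key=lv, reverse=True)
    free = [0.0] * p                    # per-processor finishing times
    done = {}                           # module -> finishing time
    for m in order:
        ready = max((done[d] for d in deps[m]), default=0.0)
        i = min(range(p), key=lambda k: free[k])   # earliest-free processor
        start = max(free[i], ready)
        free[i] = done[m] = start + cost[m]
    return max(done.values())           # makespan
```

On a diamond-shaped task graph with unit costs and two processors this yields the optimal makespan of 3; the real problem adds communication costs, which is where the bipartite-matching and simulated-annealing refinements matter.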

  13. A Lie-Theoretic Perspective on O(n) Mass Matrix Inversion for Serial Manipulators and Polypeptide Chains

    PubMed Central

    Lee, Kiju; Wang, Yunfeng; Chirikjian, Gregory S.

    2010-01-01

    Over the past several decades a number of O(n) methods for forward and inverse dynamics computations have been developed in the multi-body dynamics and robotics literature. A method was developed in 1974 by Fixman for O(n) computation of the mass-matrix determinant for a serial polymer chain consisting of point masses. In other recent papers, we extended this method in order to compute the inverse of the mass matrix for serial chains consisting of point masses. In the present paper, we extend these ideas further and address the case of serial chains composed of rigid-bodies. This requires the use of relatively deep mathematics associated with the rotation group, SO(3), and the special Euclidean group, SE(3), and specifically, it requires that one differentiates functions of Lie-group-valued argument. PMID:20165563

  14. Dynamics and Novel Mechanisms of SN2 Reactions on ab Initio Analytical Potential Energy Surfaces.

    PubMed

    Szabó, István; Czakó, Gábor

    2017-11-30

    We describe a novel theoretical approach to bimolecular nucleophilic substitution (SN2) reactions that is based on analytical potential energy surfaces (PESs) obtained by fitting a few tens of thousands of high-level ab initio energy points. These PESs allow computing millions of quasi-classical trajectories, thereby providing unprecedented statistical accuracy for SN2 reactions, as well as performing high-dimensional quantum dynamics computations. We developed full-dimensional ab initio PESs for the F⁻ + CH₃Y [Y = F, Cl, I] systems, which describe the direct and indirect, complex-forming Walden-inversion, the frontside-attack, and the new double-inversion pathways as well as the proton-transfer channels. Reaction dynamics simulations on the new PESs revealed (a) a novel double-inversion SN2 mechanism, (b) frontside complex formation, (c) the dynamics of proton transfer, (d) vibrational and rotational mode specificity, (e) mode-specific product vibrational distributions, (f) agreement between classical and quantum dynamics, (g) good agreement with measured scattering angle and product internal energy distributions, and (h) a significant leaving-group effect in accord with experiments.

  15. Upper limb joint forces and moments during underwater cyclical movements.

    PubMed

    Lauer, Jessy; Rouard, Annie Hélène; Vilas-Boas, João Paulo

    2016-10-03

    Sound inverse dynamics modeling is lacking in aquatic locomotion research because of the difficulty of measuring hydrodynamic forces in dynamic conditions. Here we report the successful implementation and validation of an innovative methodology crossing new computational fluid dynamics and inverse dynamics techniques to quantify upper limb joint forces and moments while moving in water. Upper limb kinematics of seven male swimmers sculling while ballasted with 4 kg was recorded through underwater motion capture. Together with body scans, segment inertial properties, and hydrodynamic resistances computed from a unique dynamic mesh algorithm capable of handling large body deformations, these data were fed into an inverse dynamics model to solve for joint kinetics. Simulation validity was assessed by comparing the impulse produced by the arms, calculated by integrating vertical forces over a stroke period, to the net theoretical impulse of buoyancy and ballast forces. A resulting gap of 1.2±3.5% provided confidence in the results. Upper limb joint load was within 5% of the swimmer's body weight, which tends to support the use of low-load aquatic exercises to reduce joint stress. We expect this significant methodological improvement to pave the way towards deeper insights into the mechanics of aquatic movement and the establishment of practice guidelines in rehabilitation, fitness or swimming performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Recurrent Neural Network for Computing the Drazin Inverse.

    PubMed

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to practical engineering problems are discussed to show the efficacy of the proposed neural network.
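
For context (a conventional offline baseline, not the paper's neural approach), the Drazin inverse can be computed with the Campbell-Meyer limit formula A^D = A^k (A^(2k+1))⁺ A^k for any k ≥ ind(A); taking k = n is always sufficient.

```python
import numpy as np

def drazin(A):
    # Campbell-Meyer formula: A^D = A^k @ pinv(A^(2k+1)) @ A^k for
    # k >= index(A); k = n (the matrix order) always satisfies this.
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, n)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Ak
```

Two sanity checks: an orthogonal projector is its own Drazin inverse, and a nilpotent matrix has Drazin inverse zero.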

  17. Inverse Tone Mapping Based upon Retina Response

    PubMed Central

    Huo, Yongqing; Yang, Fan; Brost, Vincent

    2014-01-01

    The development of high dynamic range (HDR) displays has spurred research on inverse tone mapping methods, which expand the dynamic range of a low dynamic range (LDR) image to match that of an HDR monitor. This paper proposes a novel physiological approach, which avoids artifacts that occur in most existing algorithms. Inspired by properties of the human visual system (HVS), this dynamic range expansion scheme performs with low computational complexity and a limited number of parameters and obtains high-quality HDR results. Comparisons with three recent algorithms in the literature also show that the proposed method reveals more important image details and produces less contrast loss and distortion. PMID:24744678

  18. Nonlinear system guidance in the presence of transmission zero dynamics

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Hunt, L. R.; Su, R.

    1995-01-01

    An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.

  19. Spatial operator approach to flexible multibody system dynamics and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1991-01-01

    The inverse and forward dynamics problems for flexible multibody systems were solved using the techniques of spatially recursive Kalman filtering and smoothing. These algorithms are easily developed using a set of identities associated with mass matrix factorization and inversion. These identities are easily derived using the spatial operator algebra developed by the author. Current work is aimed at computational experiments with the described algorithms and at modelling for control design of limber manipulator systems. It is also aimed at handling and manipulation of flexible objects.

  20. High effective inverse dynamics modelling for dual-arm robot

    NASA Astrophysics Data System (ADS)

    Shen, Haoyu; Liu, Yanli; Wu, Hongtao

    2018-05-01

    To deal with the problem of inverse dynamics modelling for a dual-arm robot, a recursive inverse dynamics modelling method based on the decoupled natural orthogonal complement is presented. In this model, the concepts and methods of decoupled natural orthogonal complement matrices are used to eliminate the constraint forces in the Newton-Euler kinematic equations, and screws are used to express the kinematic and dynamic variables. On this basis, a dedicated simulation program was developed with the symbolic software Mathematica and used for a simulation study of a dual-arm robot. Simulation results show that the proposed method saves a substantial amount of CPU time compared with the recursive Newton-Euler kinematic equations, and that the results are correct and reasonable, verifying the reliability and efficiency of the method.

  1. Feasible Muscle Activation Ranges Based on Inverse Dynamics Analyses of Human Walking

    PubMed Central

    Simpson, Cole S.; Sohn, M. Hongchul; Allen, Jessica L.; Ting, Lena H.

    2015-01-01

    Although it is possible to produce the same movement using an infinite number of different muscle activation patterns owing to musculoskeletal redundancy, the degree to which observed variations in muscle activity can deviate from optimal solutions computed from biomechanical models is not known. Here, we examined the range of biomechanically permitted activation levels in individual muscles during human walking using a detailed musculoskeletal model and experimentally-measured kinetics and kinematics. Feasible muscle activation ranges define the minimum and maximum possible level of each muscle’s activation that satisfy inverse dynamics joint torques assuming that all other muscles can vary their activation as needed. During walking, 73% of the muscles had feasible muscle activation ranges that were greater than 95% of the total muscle activation range over more than 95% of the gait cycle, indicating that, individually, most muscles could be fully active or fully inactive while still satisfying inverse dynamics joint torques. Moreover, the shapes of the feasible muscle activation ranges did not resemble previously-reported muscle activation patterns nor optimal solutions, i.e. static optimization and computed muscle control, that are based on the same biomechanical constraints. Our results demonstrate that joint torque requirements from standard inverse dynamics calculations are insufficient to define the activation of individual muscles during walking in healthy individuals. Identifying feasible muscle activation ranges may be an effective way to evaluate the impact of additional biomechanical and/or neural constraints on possible versus actual muscle activity in both normal and impaired movements. PMID:26300401
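
The core computation can be illustrated with a toy one-joint model (purely hypothetical numbers, not the paper's detailed musculoskeletal model): the feasible range of one muscle's activation is the set of values for which the remaining muscles, each bounded in [0, 1], can still reproduce the required joint torque.

```python
def feasible_range(j, torques, tau, grid=1000):
    # torques[i] = moment arm times maximum force of muscle i, i.e. the
    # joint torque it produces at full activation a_i = 1.
    # Returns (min, max) feasible activation of muscle j, or None.
    others = [t for i, t in enumerate(torques) if i != j]
    lo = sum(min(0.0, t) for t in others)   # most negative torque others can add
    hi = sum(max(0.0, t) for t in others)   # most positive torque others can add
    feas = [a / grid for a in range(grid + 1)
            if lo <= tau - torques[j] * (a / grid) <= hi]
    return (min(feas), max(feas)) if feas else None
```

With two agonists and one antagonist, the first muscle's feasible range spans all of [0, 1] for a modest torque demand, echoing the paper's finding that joint torques alone barely constrain individual muscles; an unreachable demand yields no feasible activation at all.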

  2. Recursive Factorization of the Inverse Overlap Matrix in Linear-Scaling Quantum Molecular Dynamics Simulations.

    PubMed

    Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N

    2016-07-12

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
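
The refinement step can be sketched with a Newton-Schulz-type iteration for the inverse square root (a dense toy version; the paper's implementation uses numerically thresholded sparse algebra):

```python
import numpy as np

def refine_inverse_sqrt(S, Z, iters=20):
    # Newton-Schulz-type refinement: Z <- Z (3I - Z S Z) / 2 converges
    # quadratically to S^(-1/2) for symmetric positive definite S when
    # the eigenvalues of Z S Z lie in (0, 3).
    I = np.eye(S.shape[0])
    for _ in range(iters):
        Z = 0.5 * Z @ (3 * I - Z @ S @ Z)
    return Z
```

Warm-starting Z from the previous MD step keeps ‖I − ZSZ‖ small, so only a few refinement iterations are needed per step, which is the source of the reported speedup.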

  3. Recursive Factorization of the Inverse Overlap Matrix in Linear Scaling Quantum Molecular Dynamics Simulations

    DOE PAGES

    Negre, Christian F. A; Mniszewski, Susan M.; Cawkwell, Marc Jon; ...

    2016-06-06

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the direct diagonalization of the overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.

  4. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
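
Hotelling's iteration itself fits in a few lines (a dense sketch; the paper applies a modified form with sparse, filtered multiplications and a warm start from earlier MD steps):

```python
import numpy as np

def hotelling_inverse(A, X, iters=30):
    # Hotelling-Bodewig iteration: X <- X (2I - A X) converges
    # quadratically to A^-1 when the spectral radius of (I - A X) < 1
    # for the initial X.
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X
```

A classical safe starting guess is X₀ = Aᵀ / (‖A‖₁ ‖A‖∞), which guarantees convergence; in the MD setting the previous step's inverse is an even better X₀, so far fewer iterations are needed.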

  5. Assessing when chromosomal rearrangements affect the dynamics of speciation: implications from computer simulations

    PubMed Central

    Feder, Jeffrey L.; Nosil, Patrik; Flaxman, Samuel M.

    2014-01-01

    Many hypotheses have been put forth to explain the origin and spread of inversions, and their significance for speciation. Several recent genic models have proposed that inversions promote speciation with gene flow due to the adaptive significance of the genes contained within them and because of the effects inversions have on suppressing recombination. However, the consequences of inversions for the dynamics of genome wide divergence across the speciation continuum remain unclear, an issue we examine here. We review a framework for the genomics of speciation involving the congealing of the genome into alternate adaptive states representing species (“genome wide congealing”). We then place inversions in this context as examples of how genetic hitchhiking can potentially hasten genome wide congealing. Specifically, we use simulation models to (i) examine the conditions under which inversions may speed genome congealing and (ii) quantify predicted magnitudes of these effects. Effects of inversions on promoting speciation were most common and pronounced when inversions were initially fixed between populations before secondary contact and adaptation involved many genes with small fitness effects. Further work is required on the role of underdominance and epistasis between a few loci of major effect within inversions. The results highlight five important aspects of the roles of inversions in speciation: (i) the geographic context of the origins and spread of inversions, (ii) the conditions under which inversions can facilitate divergence, (iii) the magnitude of that facilitation, (iv) the extent to which the buildup of divergence is likely to be biased within vs. outside of inversions, and (v) the dynamics of the appearance and disappearance of exceptional divergence within inversions. We conclude by discussing the empirical challenges in showing that inversions play a central role in facilitating speciation with gene flow. PMID:25206365

  6. A Special Topic From Nuclear Reactor Dynamics for the Undergraduate Physics Curriculum

    ERIC Educational Resources Information Center

    Sevenich, R. A.

    1977-01-01

    Presents an intuitive derivation of the point reactor equations followed by formulation of equations for inverse and direct kinetics which are readily programmed on a digital computer. Suggests several computer simulations involving the effect of control rod motion on reactor power. (MLH)

  7. Inverse algorithms for 2D shallow water equations in presence of wet dry fronts: Application to flood plain dynamics

    NASA Astrophysics Data System (ADS)

    Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.

    2016-11-01

    The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore for these flows, data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes may contribute to breakthrough shallow wet-dry front dynamics modelling. The present study aims at deriving an accurate, positive and stable finite volume scheme in presence of dynamic wet-dry fronts, and some corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are performed and discussed. The identification of inflow discharges (time series) or friction coefficients (spatially distributed parameters) demonstrate the algorithms efficiency.

  8. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

    This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix, leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body, forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.
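
    The spatial-operator machinery itself is beyond the scope of an abstract, but the flavor of a two-sweep recursive inverse dynamics algorithm can be sketched for the simplest case: a rigid planar serial chain of revolute joints. This is a hypothetical toy, not the paper's flexible-body formulation:

```python
import numpy as np

def cross2(a, b):
    """z-component of the 2-D cross product."""
    return a[0] * b[1] - a[1] * b[0]

def perp(v):
    """Rotate a 2-D vector by +90 degrees (alpha x r for planar motion)."""
    return np.array([-v[1], v[0]])

def rne_planar(q, qd, qdd, m, l, lc, Izz, g=9.81):
    """Recursive Newton-Euler inverse dynamics for a planar chain of
    revolute joints: given joint angles, rates, and accelerations,
    return the joint torques that produce them."""
    n = len(q)
    phi, om, al = np.cumsum(q), np.cumsum(qd), np.cumsum(qdd)
    grav = np.array([0.0, -g])
    # Outward sweep: propagate accelerations from base to tip.
    a_joint = np.zeros(2)
    a_com, r_com, r_link = [], [], []
    for i in range(n):
        u = np.array([np.cos(phi[i]), np.sin(phi[i])])   # link direction
        rc, rl = lc[i] * u, l[i] * u
        r_com.append(rc); r_link.append(rl)
        a_com.append(a_joint + al[i] * perp(rc) - om[i] ** 2 * rc)
        a_joint = a_joint + al[i] * perp(rl) - om[i] ** 2 * rl
    # Inward sweep: propagate forces and moments from tip to base.
    tau, f, nz = np.zeros(n), np.zeros(2), 0.0
    for i in reversed(range(n)):
        F = m[i] * (a_com[i] - grav)          # net force required by link i
        nz += Izz[i] * al[i] + cross2(r_com[i], F) + cross2(r_link[i], f)
        f = f + F
        tau[i] = nz
    return tau

# Torques holding a two-link arm through a prescribed acceleration.
tau = rne_planar(q=[0.4, -0.2], qd=[0.1, 0.3], qdd=[1.0, -0.5],
                 m=[2.0, 1.0], l=[0.5, 0.4], lc=[0.25, 0.2],
                 Izz=[0.05, 0.02])
```

    The two sweeps are what the paper's Newton-Euler factorization of the mass matrix generalizes to flexible bodies and, via the innovations factorization, inverts for forward dynamics.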

  9. A real-time inverse quantised transform for multi-standard with dynamic resolution support

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce

    2016-06-01

    In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual and H.264/AVC standards. The unit performs the inverse quantised discrete cosine transform and the inverse quantised integer transform using only shift and add operations. Meanwhile, the COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for the standard MPEG-4 Visual and in the H.264/AVC reference software JM 16.1, and the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
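
    To illustrate the shift-and-add principle the abstract relies on, here is a floating-point model of a CORDIC rotation (actual hardware would use fixed-point arithmetic, where the scalings by 2^-i become bit shifts); this sketch is generic, not the paper's IP core:

```python
import math

def cordic_rotate(x, y, angle, n_iter=16):
    """Rotate (x, y) by `angle` (radians, |angle| < ~1.74) with CORDIC:
    each micro-rotation needs only a sign test, additions, and scalings
    by 2^-i, which are bit shifts in fixed-point hardware."""
    atans = [math.atan(2.0 ** -i) for i in range(n_iter)]
    gain = 1.0
    for i in range(n_iter):
        gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated CORDIC gain
    z = angle
    for i in range(n_iter):
        d = 1.0 if z >= 0.0 else -1.0              # steer residual angle to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return gain * x, gain * y

c, s = cordic_rotate(1.0, 0.0, math.radians(30.0))  # ~ (cos 30, sin 30)
```

    Truncating the iteration count trades accuracy for throughput, which is the knob the abstract describes as adjustable.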

  10. Inferring Muscle-Tendon Unit Power from Ankle Joint Power during the Push-Off Phase of Human Walking: Insights from a Multiarticular EMG-Driven Model

    PubMed Central

    2016-01-01

    Introduction Inverse dynamics joint kinetics are often used to infer contributions from underlying groups of muscle-tendon units (MTUs). However, such interpretations are confounded by multiarticular (multi-joint) musculature, which can cause inverse dynamics to over- or under-estimate net MTU power. Misestimation of MTU power could lead to incorrect scientific conclusions, or to empirical estimates that misguide musculoskeletal simulations, assistive device designs, or clinical interventions. The objective of this study was to investigate the degree to which ankle joint power overestimates net plantarflexor MTU power during the Push-off phase of walking, due to the behavior of the flexor digitorum and hallucis longus (FDHL)–multiarticular MTUs crossing the ankle and metatarsophalangeal (toe) joints. Methods We performed a gait analysis study on six healthy participants, recording ground reaction forces, kinematics, and electromyography (EMG). Empirical data were input into an EMG-driven musculoskeletal model to estimate ankle power. This model enabled us to parse contributions from mono- and multi-articular MTUs, and required only one scaling and one time delay factor for each subject and speed, which were solved for based on empirical data. Net plantarflexing MTU power was computed by the model and quantitatively compared to inverse dynamics ankle power. Results The EMG-driven model was able to reproduce inverse dynamics ankle power across a range of gait speeds (R2 ≥ 0.97), while also providing MTU-specific power estimates. We found that FDHL dynamics caused ankle power to slightly overestimate net plantarflexor MTU power, but only by ~2–7%. Conclusions During Push-off, FDHL MTU dynamics do not substantially confound the inference of net plantarflexor MTU power from inverse dynamics ankle power. However, other methodological limitations may cause inverse dynamics to overestimate net MTU power; for instance, due to rigid-body foot assumptions. 
Moving forward, the EMG-driven modeling approach presented could be applied to understand other tasks or larger multiarticular MTUs. PMID:27764110

  11. Inferring Muscle-Tendon Unit Power from Ankle Joint Power during the Push-Off Phase of Human Walking: Insights from a Multiarticular EMG-Driven Model.

    PubMed

    Honert, Eric C; Zelik, Karl E

    2016-01-01

    Inverse dynamics joint kinetics are often used to infer contributions from underlying groups of muscle-tendon units (MTUs). However, such interpretations are confounded by multiarticular (multi-joint) musculature, which can cause inverse dynamics to over- or under-estimate net MTU power. Misestimation of MTU power could lead to incorrect scientific conclusions, or to empirical estimates that misguide musculoskeletal simulations, assistive device designs, or clinical interventions. The objective of this study was to investigate the degree to which ankle joint power overestimates net plantarflexor MTU power during the Push-off phase of walking, due to the behavior of the flexor digitorum and hallucis longus (FDHL)-multiarticular MTUs crossing the ankle and metatarsophalangeal (toe) joints. We performed a gait analysis study on six healthy participants, recording ground reaction forces, kinematics, and electromyography (EMG). Empirical data were input into an EMG-driven musculoskeletal model to estimate ankle power. This model enabled us to parse contributions from mono- and multi-articular MTUs, and required only one scaling and one time delay factor for each subject and speed, which were solved for based on empirical data. Net plantarflexing MTU power was computed by the model and quantitatively compared to inverse dynamics ankle power. The EMG-driven model was able to reproduce inverse dynamics ankle power across a range of gait speeds (R2 ≥ 0.97), while also providing MTU-specific power estimates. We found that FDHL dynamics caused ankle power to slightly overestimate net plantarflexor MTU power, but only by ~2-7%. During Push-off, FDHL MTU dynamics do not substantially confound the inference of net plantarflexor MTU power from inverse dynamics ankle power. However, other methodological limitations may cause inverse dynamics to overestimate net MTU power; for instance, due to rigid-body foot assumptions. 
Moving forward, the EMG-driven modeling approach presented could be applied to understand other tasks or larger multiarticular MTUs.

  12. jInv: A Modular and Scalable Framework for Electromagnetic Inverse Problems

    NASA Astrophysics Data System (ADS)

    Belliveau, P. T.; Haber, E.

    2016-12-01

    Inversion is a key tool in the interpretation of geophysical electromagnetic (EM) data. Three-dimensional (3D) EM inversion is very computationally expensive and practical software for inverting large 3D EM surveys must be able to take advantage of high performance computing (HPC) resources. It has traditionally been difficult to achieve those goals in a high level dynamic programming environment that allows rapid development and testing of new algorithms, which is important in a research setting. With those goals in mind, we have developed jInv, a framework for PDE constrained parameter estimation problems. jInv provides optimization and regularization routines, a framework for user defined forward problems, and interfaces to several direct and iterative solvers for sparse linear systems. The forward modeling framework provides finite volume discretizations of differential operators on rectangular tensor product meshes and tetrahedral unstructured meshes that can be used to easily construct forward modeling and sensitivity routines for forward problems described by partial differential equations. jInv is written in the emerging programming language Julia. Julia is a dynamic language targeted at the computational science community with a focus on high performance and native support for parallel programming. We have developed frequency and time-domain EM forward modeling and sensitivity routines for jInv. We will illustrate its capabilities and performance with two synthetic time-domain EM inversion examples. First, in airborne surveys, which use many sources, we achieve distributed memory parallelism by decoupling the forward and inverse meshes and performing forward modeling for each source on small, locally refined meshes. Secondly, we invert grounded source time-domain data from a gradient array style induced polarization survey using a novel time-stepping technique that allows us to compute data from different time-steps in parallel. 
These examples both show that it is possible to invert large scale 3D time-domain EM datasets within a modular, extensible framework written in a high-level, easy to use programming language.

  13. Preview-Based Stable-Inversion for Output Tracking

    NASA Technical Reports Server (NTRS)

    Zou, Qing-Ze; Devasia, Santosh

    1999-01-01

    Stable inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum phase systems the inverse is non-causal; hence it has to be pre-computed using a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts the use of inversion-based approaches to trajectory planning problems (for nonminimum phase systems). In the present article, it is shown that preview information of the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview time needed is quantified in terms of the tracking error and the internal dynamics of the system (the zeros of the system). The methodology is applied to the online output tracking of a flexible structure and experimental results are presented.

  14. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  15. Computation of forces from deformed visco-elastic biological tissues

    NASA Astrophysics Data System (ADS)

    Muñoz, José J.; Amat, David; Conte, Vito

    2018-04-01

    We present a least-squares based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that the computation of these forces requires the regularisation of the problem functional for some load configurations that we study here. The functional measures the error of the dynamic problem being discretised in time with a second-order implicit time-stepping and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curved criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.
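
    The regularised least-squares machinery with an L-curve criterion can be illustrated in miniature. The sketch below solves a generic Tikhonov problem (not the paper's finite-element functional) and picks the regularisation parameter at the corner of the L-curve via discrete curvature:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 as augmented least squares."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

def l_curve_corner(A, b, lams):
    """Choose lam at the point of maximum curvature of the
    (log residual norm, log solution norm) curve."""
    xs = [tikhonov(A, b, lam) for lam in lams]
    rho = np.log([np.linalg.norm(A @ x - b) for x in xs])
    eta = np.log([np.linalg.norm(x) for x in xs])
    drho, deta = np.gradient(rho), np.gradient(eta)
    ddrho, ddeta = np.gradient(drho), np.gradient(deta)
    kappa = (drho * ddeta - deta * ddrho) / (drho ** 2 + deta ** 2) ** 1.5
    return lams[int(np.argmax(kappa))]

# Ill-posed toy problem: rapidly decaying singular values plus noise.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
A = U[:, :15] * (10.0 ** -np.arange(15))
b = A @ rng.standard_normal(15) + 1e-6 * rng.standard_normal(40)
lam = l_curve_corner(A, b, np.logspace(-9, 0, 30))
```

    Too little regularisation amplifies noise through the small singular values; too much discards the observed deformations, which is the trade-off the L-curve corner balances.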

  16. Spin pumping and inverse spin Hall voltages from dynamical antiferromagnets

    NASA Astrophysics Data System (ADS)

    Johansen, Øyvind; Brataas, Arne

    2017-06-01

    Dynamical antiferromagnets can pump spins into adjacent conductors. The high antiferromagnetic resonance frequencies represent a challenge for experimental detection, but magnetic fields can reduce these resonance frequencies. We compute the ac and dc inverse spin Hall voltages resulting from dynamical spin excitations as a function of a magnetic field along the easy axis and the polarization of the driving ac magnetic field perpendicular to the easy axis. We consider the insulating antiferromagnets MnF2,FeF2, and NiO. Near the spin-flop transition, there is a significant enhancement of the dc spin pumping and inverse spin Hall voltage for the uniaxial antiferromagnets MnF2 and FeF2. In the uniaxial antiferromagnets it is also found that the ac spin pumping is independent of the external magnetic field when the driving field has the optimal circular polarization. In the biaxial NiO, the voltages are much weaker, and there is no spin-flop enhancement of the dc component.

  17. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training

    PubMed Central

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R.

    2015-01-01

    Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines. PMID:26417378

  18. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training.

    PubMed

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R; Lorenzetti, Silvio

    2015-01-01

    Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines.

  19. Kinematics and dynamics of robotic systems with multiple closed loops

    NASA Astrophysics Data System (ADS)

    Zhang, Chang-De

    The kinematics and dynamics of robotic systems with multiple closed loops, such as Stewart platforms, walking machines, and hybrid manipulators, are studied. In the study of kinematics, focus is on the closed-form solutions of the forward position analysis of different parallel systems. A closed-form solution means that the solution is expressed as a polynomial in one variable. If the order of the polynomial is less than or equal to four, the solution has analytical closed-form. First, the conditions of obtaining analytical closed-form solutions are studied. For a Stewart platform, the condition is found to be that one rotational degree of freedom of the output link is decoupled from the other five. Based on this condition, a class of Stewart platforms which has analytical closed-form solution is formulated. Conditions of analytical closed-form solution for other parallel systems are also studied. Closed-form solutions of forward kinematics for walking machines and multi-fingered grippers are then studied. For a parallel system with three three-degree-of-freedom subchains, there are 84 possible ways to select six independent joints among nine joints. These 84 ways can be classified into three categories: Category 3:3:0, Category 3:2:1, and Category 2:2:2. It is shown that the first category has no solutions; the solutions of the second category have analytical closed-form; and the solutions of the last category are higher order polynomials. The study is then extended to a nearly general Stewart platform. The solution is a 20th order polynomial and the Stewart platform has a maximum of 40 possible configurations. Also, the study is extended to a new class of hybrid manipulators which consists of two serially connected parallel mechanisms. In the study of dynamics, a computationally efficient method for inverse dynamics of manipulators based on the virtual work principle is developed. 
Although this method is comparable with the recursive Newton-Euler method for serial manipulators, its advantage is more noteworthy when applied to parallel systems. An approach of inverse dynamics of a walking machine is also developed, which includes inverse dynamic modeling, foot force distribution, and joint force/torque allocation.

  20. Estimating net joint torques from kinesiological data using optimal linear system theory.

    PubMed

    Runge, C F; Zajac, F E; Allum, J H; Risher, D W; Bryson, A E; Honegger, F

    1995-12-01

    Net joint torques (NJT) are frequently computed to provide insights into the motor control of dynamic biomechanical systems. An inverse dynamics approach is almost always used, whereby the NJT are computed from 1) kinematic measurements (e.g., position of the segments), 2) kinetic measurements (e.g., ground reaction forces) that are, in effect, constraints defining unmeasured kinematic quantities based on a dynamic segmental model, and 3) numerical differentiation of the measured kinematics to estimate velocities and accelerations that are, in effect, additional constraints. Due to errors in the measurements, the segmental model, and the differentiation process, estimated NJT rarely produce the observed movement in a forward simulation when the dynamics of the segmental system are inherently unstable (e.g., human walking). Forward dynamic simulations are, however, essential to studies of muscle coordination. We have developed an alternative approach, using the linear quadratic follower (LQF) algorithm, which computes the NJT such that a stable simulation of the observed movement is produced and the measurements are replicated as well as possible. The LQF algorithm does not employ constraints depending on explicit differentiation of the kinematic data, but rather employs those depending on specification of a cost function, based on quantitative assumptions about data confidence. We illustrate the usefulness of the LQF approach by using it to estimate NJT exerted by standing humans perturbed by support-surface movements. We show that unless the number of kinematic and force variables recorded is sufficiently high, the confidence that can be placed in the estimates of the NJT, obtained by any method (e.g., LQF, or the inverse dynamics approach), may be unsatisfactorily low.
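
    As a toy version of the inverse dynamics pipeline the abstract contrasts with (measured kinematics in, numerical differentiation, torques out), consider a single rigid segment swinging about a fixed pivot; the segment parameters below are made-up values:

```python
import numpy as np

# Net joint torque for one rigid segment about a fixed pivot:
#   tau = I * theta_ddot + m * g * l_c * sin(theta),
# with theta measured from the downward vertical and accelerations
# obtained by numerically differentiating sampled joint angles.
m, l_c, I, g = 2.0, 0.25, 0.05, 9.81     # made-up segment parameters
dt = 0.01
t = np.arange(0.0, 2.0, dt)
theta = 0.3 * np.sin(2.0 * np.pi * t)    # "measured" joint angle (rad)

theta_ddot = np.gradient(np.gradient(theta, dt), dt)   # central differences
tau = I * theta_ddot + m * g * l_c * np.sin(theta)     # net joint torque
```

    Double differentiation amplifies measurement noise, which is precisely why the LQF approach avoids constraints that depend on explicit differentiation of the kinematic data.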

  1. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which would indicate that muscles push rather than pull; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a step toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.

  2. Studies of Trace Gas Chemical Cycles Using Inverse Methods and Global Chemical Transport Models

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    2003-01-01

    We report progress in the first year, and summarize proposed work for the second year of the three-year dynamical-chemical modeling project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (b) utilization of inverse methods to determine these source/sink strengths using either MATCH (Model for Atmospheric Transport and Chemistry) which is based on analyzed observed wind fields or back-trajectories computed from these wind fields, (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important goals include determination of regional source strengths of methane, nitrous oxide, methyl bromide, and other climatically and chemically important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal protocol and its follow-on agreements and hydrohalocarbons now used as alternatives to the restricted halocarbons.

  3. Efficacy of guided spiral drawing in the classification of Parkinson's Disease.

    PubMed

    Zham, Poonam; Arjunan, Sridhar; Raghav, Sanjay; Kumar, Dinesh Kant

    2017-10-11

    Change of handwriting can be an early marker for severity of Parkinson's disease (PD) but suffers from poor sensitivity and specificity due to inter-subject variations. This study investigated the group difference in dynamic features during sketching of a spiral between PD and control subjects, with the aim of developing an accurate method for diagnosing PD patients. Dynamic handwriting features were computed for 206 specimens collected from 62 subjects (31 Parkinson's and 31 controls). These were analyzed based on the severity of the disease to determine the group difference. The Spearman rank correlation coefficient was computed to evaluate the strength of association for the different features. The maximum area under the ROC curve (AUC) using the dynamic features during the different writing and spiral sketching tasks was in the range of 67% to 79%. However, when the angular features and the count of direction inversions during sketching of the spiral were used, the AUC improved to 93.3%. The Spearman correlation coefficient was highest for the angular features. The angular features and count of direction inversions, which can be obtained in real time while sketching the Archimedean guided spiral on a digital tablet, can be used to differentiate between Parkinson's patients and healthy cohorts.
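
    For context on the AUC figures: for a single scalar feature, the area under the ROC curve equals the normalised Mann-Whitney U statistic, i.e. the probability that a randomly chosen patient scores higher than a randomly chosen control. A small sketch with made-up feature values (not the study's data):

```python
import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """AUC of a scalar feature as the normalised Mann-Whitney U:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher, counting ties as one half."""
    s_pos = np.asarray(scores_pos, dtype=float)[:, None]
    s_neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (s_pos > s_neg).sum() + 0.5 * (s_pos == s_neg).sum()
    return wins / (s_pos.size * s_neg.size)

# Hypothetical counts of direction inversions per sketched spiral.
pd_group = [9, 12, 15, 11, 14]
control_group = [5, 7, 6, 11, 4]
auc = auc_from_scores(pd_group, control_group)
```

    An AUC of 0.5 means the feature is uninformative; values approaching 1.0, like the 93.3% reported above, indicate near-complete separation of the two groups.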

  4. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator.

    PubMed

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-11-01

    In this article, we present a newly designed inverse umbrella surface aerator, and tested its performance in driving flow of an oxidation ditch. Results show that it has a better performance in driving the oxidation ditch than the original one with higher average velocity and more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. The improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four kinds of turbulent models were investigated with the approach, including the standard k - ɛ model, RNG k - ɛ model, realizable k - ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF, close to SM. It is also found that the momentum source term approach has lower computational expenses, is simpler to preprocess, and is easier to use.

  5. Inverse Force Determination on a Small Scale Launch Vehicle Model Using a Dynamic Balance

    NASA Technical Reports Server (NTRS)

    Ngo, Christina L.; Powell, Jessica M.; Ross, James C.

    2017-01-01

    A launch vehicle can experience large unsteady aerodynamic forces in the transonic regime that, while usually lasting only tens of seconds during launch, could be devastating if structural components and electronic hardware are not designed to account for them. These aerodynamic loads are difficult to measure experimentally and even harder to estimate computationally. The current method for estimating buffet loads relies on a few hundred unsteady pressure transducers and wind tunnel testing. Even with a large number of point measurements, the computed integrated load is not an accurate enough representation of the total load caused by buffeting. This paper discusses an attempt at using a dynamic balance to experimentally determine buffet loads on a generic-scale hammerhead launch vehicle model tested at NASA Ames Research Center's 11' x 11' transonic wind tunnel. To use a dynamic balance, the structural characteristics of the model needed to be identified so that the natural modal response could be removed from the aerodynamic forces. A finite element model was created for a simplified version of the model to evaluate the natural modes of the balance flexures, assist in model design, and compare against experimental data. Several modal tests were conducted on the model in two different configurations to check for non-linearity and to estimate the dynamic characteristics of the model. The experimental results were used in an inverse force determination technique with a pseudo-inverse frequency response function. Due to the non-linearity, the model not being axisymmetric, and inconsistent data between the two shake tests from the different mounting configurations, it was difficult to create a frequency response matrix that satisfied all input and output conditions for the wind tunnel configuration and accurately predicted unsteady aerodynamic loads.

  6. Effect of Oxygen Enrichment in Propane Laminar Diffusion Flames under Microgravity and Earth Gravity Conditions

    NASA Astrophysics Data System (ADS)

    Bhatia, Pramod; Singh, Ravinder

    2017-06-01

Diffusion flames are the most common type of flame encountered in daily life, such as candle and match-stick flames. They are also the most widely used flames in practical combustion systems such as industrial burners (coal, gas, or oil fired), diesel engines, gas turbines, and solid fuel rockets. In the present study, steady-state global chemistry calculations for 24 different flames were performed using an axisymmetric computational fluid dynamics code (UNICORN). The computations involved simulations of inverse and normal diffusion flames of propane under earth gravity and microgravity conditions with varying oxidizer compositions (21, 30, 50, and 100% O2, by mole, in N2). Two cases were compared with experimental results to validate the computational model. These flames were stabilized on a burner of 5.5 mm diameter and 10 mm length. The effects of oxygen enrichment and gravity variation (earth gravity and microgravity) on the shape and size of the diffusion flames, flame temperature, and flame velocity were studied from the computational results. Oxygen enrichment significantly increased the flame temperature for both types of diffusion flames. Oxygen enrichment and gravity variation also had a stronger effect on the flame configuration of normal diffusion flames than of inverse diffusion flames. Microgravity normal diffusion flames are spherical in shape and much wider than earth gravity normal diffusion flames. Inverse diffusion flames in microgravity were also wider than in earth gravity, but were not spherical in shape.

  7. Determination of eigenvalues of dynamical systems by symbolic computation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1982-01-01

A symbolic computation technique for determining the eigenvalues of dynamical systems is described wherein algebraic operations, symbolic differentiation, matrix formulation and inversion, etc., can be performed on a digital computer equipped with a formula-manipulation compiler. An example is included that demonstrates the facility with which the system dynamics matrix and the control distribution matrix from the state space formulation of the equations of motion can be processed to obtain eigenvalue loci as a function of a system parameter. The example chosen to demonstrate the technique is a fourth-order system representing the longitudinal response of a DC-8 aircraft to elevator inputs. This simplified system has two dominant modes, one lightly damped and the other well damped. The loci may be used to determine the value of the controlling parameter that satisfies design requirements. The results were obtained using the MACSYMA symbolic manipulation system.
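The same workflow can be reproduced with a modern computer algebra system such as SymPy (a stand-in for the MACSYMA system used in the paper). The second-order system below is a made-up example, not the DC-8 model: a free parameter k enters the dynamics matrix, and the eigenvalues are extracted symbolically, as one would when tracing loci versus k.

```python
import sympy as sp

# Hypothetical second-order system (not the paper's DC-8 model): a free
# parameter k enters the dynamics matrix, and eigenvalues are extracted
# symbolically, as one would when tracing loci versus k.
k, lam = sp.symbols('k lam')
A = sp.Matrix([[0, 1],
               [-k, -2]])               # state-space form of x'' + 2 x' + k x = 0

eigs = A.eigenvals()                    # {eigenvalue: multiplicity}
char_poly = A.charpoly(lam).as_expr()   # lam**2 + 2*lam + k
```

Substituting numerical values of k into the symbolic eigenvalues then sweeps out the loci without re-deriving the characteristic polynomial at each step.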

  8. Inverse targeting —An effective immunization strategy

    NASA Astrophysics Data System (ADS)

    Schneider, C. M.; Mihaljev, T.; Herrmann, H. J.

    2012-05-01

We propose a new method to immunize populations or computer networks against epidemics which is more efficient than any continuous immunization method considered before. The novelty of our method resides in the way the immunization targets are determined. First we identify those individuals or computers that contribute the least to the disease spreading, measured through their contribution to the size of the largest connected cluster in the social or computer network. The immunization process then follows the list of identified individuals or computers in inverse order, immunizing first those which are most relevant for the epidemic spreading. We have applied our immunization strategy to several model networks and two real networks, the Internet and the collaboration network of high-energy physicists. We find that our new immunization strategy is up to 14% more efficient for model networks and up to 33% more efficient for real networks than dynamically immunizing the most connected nodes in a network. Our strategy is also numerically efficient and can therefore be applied to large systems.
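A toy sketch of the target-selection idea as we read it from the abstract (the graph and the scoring rule are illustrative assumptions, not the paper's exact procedure): score each node by how much its removal shrinks the largest connected cluster, then immunize in descending order of that score.

```python
# Toy sketch; the graph and the scoring rule are illustrative assumptions.

def largest_cluster(adj, removed=frozenset()):
    """Size of the largest connected component, ignoring `removed` nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                 # depth-first traversal of one component
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def immunization_order(adj):
    base = largest_cluster(adj)
    # Contribution of each node = how much its removal shrinks the largest cluster.
    score = {n: base - largest_cluster(adj, removed={n}) for n in adj}
    # Immunize the most relevant spreaders first.
    return sorted(adj, key=lambda n: score[n], reverse=True)

# Two triangles joined through the bridge edge c-d.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'},
       'd': {'c', 'e', 'f'}, 'e': {'d', 'f'}, 'f': {'d', 'e'}}
order = immunization_order(adj)
```

On this toy graph the two bridge endpoints rank first, matching the intuition that removing them fragments the network most.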

  9. Revealing a double-inversion mechanism for the F⁻+CH₃Cl SN2 reaction.

    PubMed

    Szabó, István; Czakó, Gábor

    2015-01-19

    Stereo-specific reaction mechanisms play a fundamental role in chemistry. The back-side attack inversion and front-side attack retention pathways of the bimolecular nucleophilic substitution (SN2) reactions are the textbook examples for stereo-specific chemical processes. Here, we report an accurate global analytic potential energy surface (PES) for the F(-)+CH₃Cl SN2 reaction, which describes both the back-side and front-side attack substitution pathways as well as the proton-abstraction channel. Moreover, reaction dynamics simulations on this surface reveal a novel double-inversion mechanism, in which an abstraction-induced inversion via a FH···CH₂Cl(-) transition state is followed by a second inversion via the usual [F···CH₃···Cl](-) saddle point, thereby opening a lower energy reaction path for retention than the front-side attack. Quasi-classical trajectory computations for the F(-)+CH₃Cl(ν1=0, 1) reactions show that the front-side attack is a fast direct, whereas the double inversion is a slow indirect process.

  10. Reversible elementary cellular automaton with rule number 150 and periodic boundary conditions over 𝔽p

    NASA Astrophysics Data System (ADS)

    Martín Del Rey, A.; Rodríguez Sánchez, G.

    2015-03-01

We study the reversibility of elementary cellular automata with rule number 150 over the finite state set 𝔽p, endowed with periodic boundary conditions. The dynamics of such discrete dynamical systems are characterized by means of characteristic circulant matrices, whose analysis allows us to show that reversibility depends on the number of cells of the cellular space, and to explicitly compute the corresponding inverse cellular automata.
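The characterization can be made concrete. Assuming the usual definition of rule 150 (each cell becomes the sum of itself and its two neighbours mod p), one step with periodic boundaries is multiplication by a circulant matrix over 𝔽p, and the automaton is reversible exactly when that matrix is invertible:

```python
import numpy as np

# Assumed construction: rule 150 sets each cell to the sum of itself and its
# two neighbours mod p, so one step with periodic boundaries is multiplication
# by a circulant matrix with first row [1, 1, 0, ..., 0, 1] over F_p.

def rule150_matrix(n, p):
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        M[i, i] = M[i, (i - 1) % n] = M[i, (i + 1) % n] = 1
    return M % p

def is_reversible(n, p):
    """Reversible iff the step matrix is invertible over F_p (p prime)."""
    M = rule150_matrix(n, p)
    rank = 0
    for col in range(n):                 # Gaussian elimination mod p
        pivot = next((r for r in range(rank, n) if M[r, col] % p), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]                 # row swap
        M[rank] = (M[rank] * pow(int(M[rank, col]), -1, p)) % p
        for r in range(n):
            if r != rank and M[r, col]:
                M[r] = (M[r] - M[r, col] * M[rank]) % p
        rank += 1
    return rank == n
```

For example, over 𝔽2 the automaton with 3 cells is irreversible (every row of the matrix is all ones), while the 4-cell case is reversible, illustrating the dependence on the number of cells.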

  11. Dynamic Shape Reconstruction of Three-Dimensional Frame Structures Using the Inverse Finite Element Method

    NASA Technical Reports Server (NTRS)

    Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander

    2011-01-01

A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing, this inverse problem has important implications for real-time actuation and control of smart structures, and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM; to simulate strain-gauge values and to provide reference displacements, a high-fidelity MSC/NASTRAN shell finite element model is used. Examples of low- and high-frequency dynamic motion are analyzed and the solution accuracy examined with respect to various levels of discretization and the number of strain gauges.

  12. Angle-domain common imaging gather extraction via Kirchhoff prestack depth migration based on a traveltime table in transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin

    2018-04-01

Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of common image-point gathers in the offset domain, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media based on the gradient of traveltime by Kirchhoff pre-stack depth migration (KPSDM), as the dynamic programming method for computing the traveltime in TI media does not suffer from the limitations of shadow zones and traveltime interpolation. We also present a specific implementation strategy for ADCIG extraction via KPSDM, comprising three major steps: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples demonstrate the effectiveness of our approach and its potential for subsequent tomographic velocity inversion and AVA analysis.
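Step (2), recovering the slowness vector from the gradient of a traveltime table, can be checked on a hypothetical homogeneous-medium example (a simplification; not the paper's TI implementation): the gradient magnitude should equal the reciprocal velocity, and its direction gives the ray angle.

```python
import numpy as np

# Hypothetical homogeneous-medium check (constant v; not the paper's TI code):
# the gradient of a traveltime table is the slowness vector, so its magnitude
# should recover 1/v and its direction the ray angle.
v = 2.0                                   # velocity, km/s
x = np.linspace(-2.0, 2.0, 201)
z = np.linspace(0.5, 4.0, 201)
X, Z = np.meshgrid(x, z, indexing='ij')
T = np.sqrt(X**2 + Z**2) / v              # traveltime from a source at the origin

Tx, Tz = np.gradient(T, x, z)             # numerical slowness components
slowness = np.hypot(Tx, Tz)               # should be ~1/v everywhere
angle = np.degrees(np.arctan2(Tx, Tz))    # ray angle measured from vertical
```

In an anisotropic medium the gradient magnitude varies with direction, which is exactly the information the TI traveltime table carries into the angle-gather construction.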

  13. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  14. 4D Subject-Specific Inverse Modeling of the Chick Embryonic Heart Outflow Tract Hemodynamics

    PubMed Central

    Goenezen, Sevan; Chivukula, Venkat Keshav; Midgett, Madeline; Phan, Ly; Rugonyi, Sandra

    2015-01-01

Blood flow plays a critical role in regulating embryonic cardiac growth and development, with altered flow leading to congenital heart disease. Progress in the field, however, is hindered by a lack of quantification of hemodynamic conditions in the developing heart. In this study, we present a methodology to quantify blood flow dynamics in the embryonic heart using subject-specific computational fluid dynamics (CFD) models. While the methodology is general, we focused on a model of the chick embryonic heart outflow tract (OFT), which distally connects the heart to the arterial system, and is the region of origin of many congenital cardiac defects. Using structural and Doppler velocity data collected from optical coherence tomography (OCT), we generated 4D (3D + time) embryo-specific CFD models of the heart OFT. To replicate the blood flow dynamics over time during the cardiac cycle, we developed an iterative inverse-method optimization algorithm, which determines the CFD model boundary conditions such that differences between computed and measured velocities at one point within the OFT lumen are minimized. Results from our developed CFD model agree with previously measured hemodynamics in the OFT. Further, computed and measured velocities differ by less than 15% at locations that were not used in the optimization, validating the model. The presented methodology can be used to quantify embryonic cardiac hemodynamics under normal and altered blood flow conditions, enabling an in-depth quantitative study of how blood flow influences cardiac development. PMID:26361767

  15. Mantle Circulation Models with variational data assimilation: Inferring past mantle flow and structure from plate motion histories and seismic tomography

    NASA Astrophysics Data System (ADS)

    Bunge, H.; Hagelberg, C.; Travis, B.

    2002-12-01

EarthScope will deliver data on structure and dynamics of continental North America and the underlying mantle on an unprecedented scale. Indeed, the scope of EarthScope makes its mission comparable to the large remote sensing efforts that are transforming the oceanographic and atmospheric sciences today. Arguably the main impact of new solid Earth observing systems is to transform our use of geodynamic models from conditions that are data poor to an environment that is data rich. Oceanographers and meteorologists have already made substantial progress in adapting to this environment, by developing new approaches of interpreting oceanographic and atmospheric data objectively through data assimilation methods in their models. However, a similarly rigorous theoretical framework for merging EarthScope-derived solid Earth data with geodynamic models has yet to be devised. Here we explore the feasibility of data assimilation in mantle convection studies in an attempt to fit global geodynamic model calculations explicitly to tomographic and tectonic constraints. This is an inverse problem not unlike the inverse problem of finding optimal seismic velocity structures faced by seismologists. We derive the generalized inverse of mantle convection from a variational approach and present the adjoint equations of mantle flow. The substantial computational burden associated with solutions to the generalized inverse problem of mantle convection is made feasible using a highly efficient finite element approach based on the 3-D spherical fully parallelized mantle dynamics code TERRA, implemented on a cost-effective PC-cluster (geowulf) dedicated specifically to large-scale geophysical simulations. This dedicated geophysical modeling computer allows us to investigate global inverse convection problems having a spatial discretization of less than 50 km throughout the mantle.
We present a synthetic high-resolution modeling experiment to demonstrate that mid-Cretaceous mantle structure can be inferred accurately from our inverse approach assuming present-day mantle structure is well-known, even if an initial first guess assumption about the mid-Cretaceous mantle involved only a simple 1-D radial temperature profile. We suggest that geodynamic inverse modeling should make it possible to infer a number of flow parameters from observational constraints of the mantle.

  16. Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2010-01-01

Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have proved to be a robust method for PET image reconstruction; however, they do not consider temporal constraints during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical use because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.

  17. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator

    PubMed Central

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-01-01

In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it drives the oxidation ditch better than the original design, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four kinds of turbulence models were investigated with the approach, including the standard k−ɛ model, RNG k−ɛ model, realizable k−ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and the sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use. PMID:24302850

  18. Recursive dynamics for flexible multibody systems using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1990-01-01

Due to their structural flexibility, spacecraft and space manipulators are multibody systems with complex dynamics that possess a large number of degrees of freedom. Here the spatial operator algebra methodology is used to develop a new dynamics formulation and spatially recursive algorithms for such flexible multibody systems. A key feature of the formulation is that the operator description of the flexible system dynamics is identical in form to the corresponding operator description of the dynamics of rigid multibody systems. A significant advantage of this unifying approach is that it allows ideas and techniques for rigid multibody systems to be easily applied to flexible multibody systems. The algorithms use standard finite-element and assumed-modes models for the individual body deformation. A Newton-Euler Operator Factorization of the mass matrix of the multibody system is first developed. It forms the basis for recursive algorithms such as the inverse dynamics, the computation of the mass matrix, and the composite-body forward dynamics for the system. Subsequently, an alternative Innovations Operator Factorization of the mass matrix, each of whose factors is invertible, is developed. It leads to an operator expression for the inverse of the mass matrix, and forms the basis for the recursive articulated-body forward dynamics algorithm for the flexible multibody system. For simplicity, most of the development here focuses on serial chain multibody systems; however, extensions of the algorithms to general-topology flexible multibody systems are described. While the computational cost of the algorithms depends on factors such as the topology and the amount of flexibility in the multibody system, it appears that, in contrast to the rigid multibody case, the articulated-body forward dynamics algorithm is the more efficient algorithm for flexible multibody systems containing even a small number of flexible bodies.
The variety of algorithms described here permits a user to choose the algorithm which is optimal for the multibody system at hand. The availability of a number of algorithms is even more important for real-time applications, where implementation on parallel processors or custom computing hardware is often necessary to maximize speed.
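The flavour of the two-pass inverse-dynamics recursion can be conveyed by a deliberately reduced toy model (point masses on prismatic joints along one axis; this is an illustration of the O(n) recursion structure, not the spatial-operator formulation itself): an outward pass accumulates accelerations and an inward pass accumulates forces.

```python
# Deliberately reduced toy (point masses, prismatic joints along one axis),
# illustrating the O(n) two-pass structure of the inverse-dynamics recursion;
# it is not the spatial-operator formulation itself.

def inverse_dynamics(masses, joint_acc):
    n = len(masses)
    # Outward pass: body i's acceleration is the sum of joint accelerations 0..i.
    body_acc, a = [], 0.0
    for qdd in joint_acc:
        a += qdd
        body_acc.append(a)
    # Inward pass: joint i carries the inertial force of every outboard body.
    forces, f = [0.0] * n, 0.0
    for i in range(n - 1, -1, -1):
        f += masses[i] * body_acc[i]
        forces[i] = f
    return forces

tau = inverse_dynamics([2.0, 1.0, 0.5], [1.0, 0.0, -1.0])
```

Each pass touches every body once, which is the property the operator factorizations above generalize to full spatial dynamics with flexibility.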

  19. Dynamic data integration and stochastic inversion of a confined aquifer

    NASA Astrophysics Data System (ADS)

    Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.

    2013-12-01

Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method to large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, decreasing numbers of wells (12, 6, 3) were sampled for facies types, from which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns on a 100×100 (geostatistical) grid, conditioned to the facies measurements at the wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy within ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige et al., 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian Noise Perturbation was used to limit the growth of the CN before the matrix solve.
To scale the inverse problem up (i.e., without smoothing and coarsening, and therefore reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by a factor of 14 using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, and 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable, but the accuracy of the Ks and boundary estimation degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
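The effect of a matrix-scaling preprocessor can be sketched on synthetic data (an assumed, simplified stand-in: dense NumPy `lstsq` instead of the sparse LSQR solver, and a random matrix rather than the paper's inversion system): scaling columns to unit norm collapses the condition number before the least-squares solve.

```python
import numpy as np

# Assumed, simplified stand-in: dense lstsq instead of sparse LSQR. Columns of
# A span six orders of magnitude; scaling them to unit norm improves the
# condition number before the least-squares solve.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50)) * np.logspace(0, 6, 50)   # badly scaled columns
x_true = np.ones(50)
b = A @ x_true

d = 1.0 / np.linalg.norm(A, axis=0)       # per-column scale factors
kappa_raw = np.linalg.cond(A)
kappa_scaled = np.linalg.cond(A * d)      # scaled system is far better conditioned

y, *_ = np.linalg.lstsq(A * d, b, rcond=None)
x = d * y                                  # undo the scaling to recover x
```

The same diagonal scaling can be applied to a sparse system before handing it to an iterative solver such as LSQR, which converges faster on the better-conditioned matrix.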

  20. Network-Physics (NP) BEC DIGITAL(#)-VULNERABILITY; ``Q-Computing"=Simple-Arithmetic;Modular-Congruences=SignalXNoise PRODUCTS=Clock-model;BEC-Factorization;RANDOM-# Definition;P=/=NP TRIVIAL Proof!!!

    NASA Astrophysics Data System (ADS)

    Pi, E. I.; Siegel, E.

    2010-03-01

Siegel [AMS Natl. Mtg. (2002) Abs. 973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC: Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks (VS. ANALOG INvulnerability) via Barabasi NP (VS. dynamics [Not. AMS (5/2009)] critique); (so-called) ``quantum-computing'' (QC) = simple-arithmetic (sans division); algorithmic complexities: INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS (so MIScalled) ``noise''-induced-phase-transition (NIT) ACCELERATION: Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query (VS. Goldreich [Not. AMS (2002)] How? mea culpa) = ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor [Physica A, 341, 586 (04)] BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation (3 millennia AGO geometry: NO: CC, ``CS''; ``Feet of Clay!!!'']; Query WHAT?: Definition: (so MIScalled) ``complexity'' = UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).

  1. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
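A minimal unconstrained scalar sketch of the prediction-correction idea (our own illustration under simplified assumptions, not the paper's constrained algorithm): predict by extrapolating the tracked solution, then correct with a gradient step on the current cost.

```python
import numpy as np

# Minimal unconstrained scalar sketch (illustrative assumptions throughout):
# track the minimizer of f_t(x) = (x - c(t))^2 as c(t) drifts over time.
def c(t):
    return np.sin(t)               # time-varying location of the optimum

dt, alpha = 0.01, 0.3              # sampling interval, correction step size

x_prev, x = 0.0, 0.0
errors = []
for k in range(1, 2001):
    t = k * dt
    x_pred = 2 * x - x_prev        # prediction: linear extrapolation of the track
    x_prev = x
    x = x_pred - alpha * 2 * (x_pred - c(t))   # correction: one gradient step on f_t
    errors.append(abs(x - c(t)))
```

The prediction step keeps the iterate close to the moving optimum so that a single cheap correction per sample suffices; the paper's contribution is doing this for constrained problems without inverting the Hessian.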

  3. Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System

    ERIC Educational Resources Information Center

    Schmidt, Karsten

    2008-01-01

    In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. Making it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…
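A numerical counterpart of those "Derive" functions (here in NumPy, for illustration): compute the Moore-Penrose inverse, solve an inconsistent linear system in the least-squares sense, and verify one of the defining Penrose conditions.

```python
import numpy as np

# Illustrative NumPy counterpart (the paper uses the Derive CAS): the
# Moore-Penrose inverse gives the minimum-norm least-squares solution of an
# inconsistent system and satisfies the Penrose conditions.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])       # overdetermined, no exact solution

A_pinv = np.linalg.pinv(A)
x = A_pinv @ b                      # least-squares solution, here [1/3, 1/3]

cond1 = np.allclose(A @ A_pinv @ A, A)   # Penrose condition A A^+ A = A
```

Unlike the ordinary inverse, the pseudo-inverse exists for any matrix, which is what makes it suitable for classroom treatment of singular and rectangular systems.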

  4. Improved real-time dynamics from imaginary frequency lattice simulations

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jan M.; Rothkopf, Alexander

    2018-03-01

The computation of real-time properties, such as transport coefficients or bound state spectra of strongly interacting quantum fields in thermal equilibrium, is a pressing matter. Since the sign problem prevents a direct evaluation of these quantities, lattice data needs to be analytically continued from the Euclidean domain of the simulation to Minkowski time, in general an ill-posed inverse problem. Here we report on a novel approach to improve the determination of real-time information in the form of spectral functions by setting up a simulation prescription in imaginary frequencies. By carefully distinguishing between initial conditions and quantum dynamics one obtains access to correlation functions also outside the conventional Matsubara frequencies. In particular, the range between ω0 and ω1 = 2πT, which is most relevant for the inverse problem, may be more highly resolved. In combination with the fact that in imaginary frequencies the kernel of the inverse problem is not an exponential but only a rational function, we observe significant improvements in the reconstruction of spectral functions, demonstrated in a simple 0+1 dimensional scalar field theory toy model.

  5. Improving rotorcraft survivability to RPG attack using inverse methods

    NASA Astrophysics Data System (ADS)

    Anderson, D.; Thomson, D. G.

    2009-09-01

This paper presents the results of a preliminary investigation of optimal threat evasion strategies for improving the survivability of rotorcraft under attack by rocket propelled grenades (RPGs). The basis of this approach is the application of inverse simulation techniques, pioneered for the simulation of aggressive helicopter manoeuvres, to the RPG engagement problem. In this research, improvements in survivability are achieved by computing effective evasive manoeuvres. The first step in this process uses the missile approach warning system (MAWS) camera on the aircraft to provide angular information on the threat. The RPG trajectory and impact point are then estimated. For the current flight state, an appropriate evasion response is selected and then realised via inverse simulation of the platform dynamics. Results are presented for several representative engagements, showing the efficacy of the approach.

  6. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. 
This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
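Methods of this family build on the weighted minimum-norm estimate, in which an fMRI-derived spatial prior enters as a source covariance. Below is a minimal sketch of that baseline (not the DBTN algorithm itself); the lead-field `G`, the weights, and the regularization value are illustrative assumptions:

```python
import numpy as np

def weighted_mne(G, y, W, lam):
    """Weighted minimum-norm estimate: s = W G^T (G W G^T + lam*I)^-1 y,
    where W encodes the fMRI spatial prior as a source covariance."""
    K = G @ W @ G.T + lam * np.eye(G.shape[0])
    return W @ G.T @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 10))   # toy lead-field: 4 sensors, 10 sources
w = np.ones(10)
w[2:4] = 10.0                      # fMRI activation upweights sources 2 and 3
s_true = np.zeros(10)
s_true[2] = 1.0
y = G @ s_true                     # noiseless sensor data
s_hat = weighted_mne(G, y, np.diag(w), 1e-6)
```

DBTN additionally makes the prior time-variant and Bayesian; the fixed diagonal `W` here captures only the spatial half of that idea.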

  7. Flow Applications of the Least Squares Finite Element Method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  8. Center of pressure based segment inertial parameters validation

    PubMed Central

    Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane

    2017-01-01

    By proposing efficient methods for Body Segment Inertial Parameter (BSIP) estimation and validating them with a force plate, it is possible to improve the inverse dynamic computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth, and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of the COP movements. Improving BSIP techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090
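The ground-truth COP in a validation of this kind comes directly from the force-plate outputs. A minimal sketch of that computation, assuming a plate with its origin on the top surface and the z axis vertical (sign and axis conventions vary by manufacturer):

```python
import numpy as np

def center_of_pressure(F, M):
    """COP on the plate surface from the ground reaction force F = (Fx, Fy, Fz)
    and the moment M = (Mx, My, Mz) about the plate origin, with the origin on
    the surface and z up. Standard relations: x = -My/Fz, y = Mx/Fz."""
    Fx, Fy, Fz = F
    Mx, My, Mz = M
    return np.array([-My / Fz, Mx / Fz])

# a 700 N subject standing 0.05 m forward (x) of the plate origin:
F = (0.0, 0.0, 700.0)
M = (0.0, -35.0, 0.0)   # My = -x * Fz
print(center_of_pressure(F, M))   # -> [0.05 0.  ]
```

The relations follow from M = r x F with r on the surface; plates whose origin sits below the surface need an extra vertical-offset term not shown here.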

  9. Transistor analogs of emergent iono-neuronal dynamics.

    PubMed

    Rachmuth, Guy; Poon, Chi-Sang

    2008-06-01

    Neuromorphic analog metal-oxide-silicon (MOS) transistor circuits promise compact, low-power, and high-speed emulations of iono-neuronal dynamics orders-of-magnitude faster than digital simulation. However, their inherently limited input voltage dynamic range, together with power-consumption and silicon-die-area tradeoffs, makes them highly sensitive to transistor mismatch due to fabrication inaccuracy, device noise, and other nonidealities. This limitation precludes robust analog very-large-scale-integration (aVLSI) circuit implementation of emergent iono-neuronal dynamics computations beyond simple spiking with limited ion channel dynamics. Here we present versatile neuromorphic analog building-block circuits that afford near-maximum voltage dynamic range operating within the low-power MOS transistor weak-inversion regime, which is ideal for aVLSI implementation or implantable biomimetic device applications. The fabricated microchip allowed robust realization of dynamic iono-neuronal computations such as coincidence detection of presynaptic spikes or pre- and postsynaptic activities. As a critical performance benchmark, the high-speed and highly interactive iono-neuronal simulation capability on-chip enabled our prompt discovery of a minimal model of chaotic pacemaker bursting, an emergent iono-neuronal behavior of fundamental biological significance which had hitherto defied experimental testing or computational exploration via conventional digital or analog simulations. These compact and power-efficient transistor analogs of emergent iono-neuronal dynamics open new avenues for next-generation neuromorphic, neuroprosthetic, and brain-machine interface applications.

  10. Tuning Fractures With Dynamic Data

    NASA Astrophysics Data System (ADS)

    Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao

    2018-02-01

    Flow in fractured porous media is crucial for production of oil/gas reservoirs and exploitation of geothermal energy. Flow behaviors in such media are mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behaviors. Inverse modeling with dynamic data may assist to constrain fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus rendering desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures: the matrix is discretized on a structured grid, and the fractures, handled as planes, are inserted into the matrix grid cells. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (through updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating. 
Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies with increasing complexity are set up to examine the performance of the proposed approach.
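The Hough transform represents a line by its normal angle theta and its signed distance rho from the origin, which turns a discrete fracture into continuous, tunable parameters. A hypothetical 2D sketch of that mapping (the paper's exact parameterization may differ in detail):

```python
import numpy as np

def segment_to_hough(cx, cy, orient):
    """Map a 2D fracture (center point, strike orientation) to Hough line
    parameters: theta is the direction of the line's normal, and rho its
    signed distance from the origin, rho = x*cos(theta) + y*sin(theta)."""
    theta = orient + np.pi / 2.0          # normal is perpendicular to the strike
    rho = cx * np.cos(theta) + cy * np.sin(theta)
    return rho, theta

def point_on_line(rho, theta, t):
    """Points of the Hough line: foot of the normal plus t along the strike."""
    foot = np.array([rho * np.cos(theta), rho * np.sin(theta)])
    direction = np.array([-np.sin(theta), np.cos(theta)])
    return foot + t * direction

rho, theta = segment_to_hough(1.0, 1.0, 0.0)   # horizontal fracture through (1, 1)
# rho = 1.0: the supporting line y = 1 lies one unit from the origin
```

Because rho and theta vary continuously, a fracture's location and orientation can be perturbed smoothly during inverse-model updates, which is exactly the property the discrete fracture field lacks.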

  11. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. However, given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
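The core loop of such a scheme draws random source geometries and keeps those that best fit the data. A toy sketch, with a random Green's-function matrix standing in for the precomputed single-force library:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_moment_tensor():
    """Random symmetric moment tensor: random orthonormal eigenvectors
    (QR of a Gaussian matrix) and random eigenvalues, mimicking a Monte
    Carlo scan over source geometries."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    lam = rng.uniform(-1.0, 1.0, size=3)
    return Q @ np.diag(lam) @ Q.T

def misfit(G, d, m):
    """L2 waveform misfit for the 6 independent tensor components."""
    m6 = m[np.triu_indices(3)]   # Mxx, Mxy, Mxz, Myy, Myz, Mzz
    return np.linalg.norm(G @ m6 - d)

# toy Green's-function matrix (20 waveform samples x 6 components) and data
G = rng.standard_normal((20, 6))
m_true = random_moment_tensor()
d = G @ m_true[np.triu_indices(3)]

# keep the best of 2000 random trial tensors
best = min((random_moment_tensor() for _ in range(2000)),
           key=lambda m: misfit(G, d, m))
```

In the actual study each trial also randomizes the centroid node and evaluates full synthetic waveforms; the retained low-misfit population, not a single best model, is what defines the robustly resolved features.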

  12. Adjoint Sensitivity Analysis of Orbital Mechanics: Application to Computations of Observables' Partials with Respect to Harmonics of the Planetary Gravity Fields

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.; Sunseri, Richard F.

    2005-01-01

    An approach is presented to the inversion of gravity fields based on evaluation of partials of observables with respect to gravity harmonics using the solution of the adjoint problem of orbital dynamics of the spacecraft. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of gravity harmonics desired, this method involves integration of N adjoint solutions as compared to integration of N² partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher resolution gravity models, this approach becomes increasingly more effective in terms of computer resources as compared to the approach based on the solution of the forward problem of orbital dynamics.

  13. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
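The recurrence at issue is the forward-elimination/back-substitution sweep of a (block) tridiagonal solve. The scalar Thomas algorithm below shows the serial dependency chain that the partitioning and scheduling schemes must work around; the block tri- and penta-diagonal cases replace scalar divisions with small matrix inversions:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    main diagonal b, and super-diagonal c (c[-1] unused). The two sweeps
    are exactly the first- and second-order recurrences whose global data
    dependencies make parallelization non-trivial."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson-like system with the (-1, 2, -1) stencil
n = 5
a = np.r_[0.0, -np.ones(n - 1)]
b = 2.0 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]
x = thomas(a, b, c, np.ones(n))
```

Each `i` depends on `i - 1` (and `i + 1` on the way back), which is why naive domain decomposition stalls and substructuring or pipelined variants are needed.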

  14. A Volunteer Computing Project for Solving Geoacoustic Inversion Problems

    NASA Astrophysics Data System (ADS)

    Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya

    2017-12-01

    A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of the sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem is well suited to volunteer computing because it can be easily decomposed into independent simpler subproblems.

  15. TOPEX/POSEIDON tides estimated using a global inverse model

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Bennett, Andrew F.; Foreman, Michael G. G.

    1994-01-01

    Altimetric data from the TOPEX/POSEIDON mission will be used for studies of global ocean circulation and marine geophysics. However, it is first necessary to remove the ocean tides, which are aliased in the raw data. The tides are constrained by two distinct types of information: the hydrodynamic equations which the tidal fields of elevations and velocities must satisfy, and direct observational data from tide gauges and satellite altimetry. Here we develop and apply a generalized inverse method, which allows us to combine rationally all of this information into global tidal fields best fitting both the data and the dynamics, in a least squares sense. The resulting inverse solution is a sum of the direct solution to the astronomically forced Laplace tidal equations and a linear combination of the representers for the data functionals. The representer functions (one for each datum) are determined by the dynamical equations, and by our prior estimates of the statistics of the errors in these equations. Our major task is a direct numerical calculation of these representers. This task is computationally intensive, but well suited to massively parallel processing. By calculating the representers we reduce the full (infinite dimensional) problem to a relatively low-dimensional problem at the outset, allowing full control over the conditioning and hence the stability of the inverse solution. With the representers calculated we can easily update our model as additional TOPEX/POSEIDON data become available. As an initial illustration we invert harmonic constants from a set of 80 open-ocean tide gauges. We then present a practical scheme for direct inversion of TOPEX/POSEIDON crossover data. We apply this method to 38 cycles of geophysical data records (GDR) data, computing preliminary global estimates of the four principal tidal constituents, M2, S2, K1 and O1. 
The inverse solution yields tidal fields which are simultaneously smoother, and in better agreement with altimetric and ground truth data, than previously proposed tidal models. Relative to the 'default' tidal corrections provided with the TOPEX/POSEIDON GDR, the inverse solution reduces crossover difference variances significantly (approximately 20-30%), even though only a small number of free parameters (approximately equal to 1000) are actually fit to the crossover data.
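In a small linear-Gaussian setting, the representer calculation reduces to a few matrix products: one representer per datum, with the coefficients obtained from a system whose size equals the number of data. A toy sketch (all dimensions and covariances are illustrative, not the tidal model):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 5                      # state dimension, number of data

C = np.eye(n)                     # prior covariance of the dynamical errors
Ce = 0.01 * np.eye(m)             # data-error covariance
L = rng.standard_normal((m, n))   # one linear measurement functional per datum
u_F = np.zeros(n)                 # forced (prior) dynamical solution
d = rng.standard_normal(m)        # observations

R = C @ L.T                                        # representers: one column per datum
beta = np.linalg.solve(L @ R + Ce, d - L @ u_F)    # m x m system, not n x n
u = u_F + R @ beta                                 # inverse solution = prior + sum_j beta_j r_j
```

The point of the construction is visible in the shapes: the solve is m x m (number of data), however large the state `n` is, which is what makes the infinite-dimensional tidal problem tractable.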

  16. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  17. Analysis of Raman lasing without inversion

    NASA Astrophysics Data System (ADS)

    Sheldon, Paul Martin

    1999-12-01

    Properties of lasing without inversion were studied analytically and numerically using Maple computer assisted algebra software. Gain for a probe electromagnetic field without population inversion has been found in detuned three-level atomic schemes. Matter density matrix dynamics and coherence are explored using Pauli matrices in 2-level systems and Gell-Mann matrices in 3-level systems. It is shown that extreme inversion produces no coherence and hence no lasing. A unitary transformation from the strict field-matter Hamiltonian to an effective two-photon Raman Hamiltonian for multilevel systems has been derived. Feynman diagrams inherent in the derivation show interesting physics. An additional picture change was achieved and showed cw gain possible. Properties of a Raman-like laser based on injection of 3-level coherently driven Λ-type atoms, whose Hamiltonian contains the Raman Hamiltonian and a microwave coupling of the two bottom states, have been studied in the limits of small and large photon numbers in the drive field. Another picture change removed the microwave coupler to all orders and simplified analysis. New possibilities of inversionless generation were found.

  18. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by the QR decomposition approaches.
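For orientation, the filter vector the INVQR scheme computes is the same one produced by conventional weighted RLS; the contribution is the numerically robust, systolic way of computing it. A minimal sketch of the standard (non-systolic) update, with an illustrative forgetting factor:

```python
import numpy as np

def rls_update(w, P, phi, y, lam=0.99):
    """One weighted recursive-least-squares step for a linear-in-the-weights
    model y ~ phi @ w, with forgetting factor lam. The INVQR scheme in the
    paper computes the same filter vector with better numerical behaviour."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    w = w + k * (y - phi @ w)              # correct by the prediction error
    P = (P - np.outer(k, Pphi)) / lam      # inverse-covariance downdate
    return w, P

# identify w_true = [2, -1] from noisy streaming data
rng = np.random.default_rng(3)
w, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(200):
    phi = rng.standard_normal(2)
    y = phi @ np.array([2.0, -1.0]) + 0.01 * rng.standard_normal()
    w, P = rls_update(w, P, phi, y)
```

Propagating `P` directly, as here, is exactly the step that can lose positive definiteness in finite precision; square-root forms such as QR/INVQR avoid it.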

  19. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. For many practical problems, however, because the observed data sets are often large and the model parameters numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. 
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
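The baseline being accelerated is the damped normal-equation solve at the heart of Levenberg-Marquardt. A toy sketch of that baseline on a small curve-fitting problem (the Krylov projection and subspace recycling of the paper are not reproduced here):

```python
import numpy as np

def levenberg_marquardt(r, J, x0, mu=1e-2, iters=100):
    """Basic Levenberg-Marquardt: at each step solve the damped normal
    equations (J^T J + mu I) dx = -J^T r. The paper's variant replaces
    this dense solve with a recycled Krylov-subspace projection."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        dx = np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), -Jx.T @ rx)
        if np.linalg.norm(r(x + dx)) < np.linalg.norm(rx):
            x, mu = x + dx, mu * 0.5       # accept step: relax damping
        else:
            mu *= 10.0                     # reject: steepen toward gradient descent
    return x

# fit y = exp(a*t) + b to data generated with a = -1.5, b = 0.7
t = np.linspace(0.0, 2.0, 30)
y = np.exp(-1.5 * t) + 0.7
r = lambda x: np.exp(x[0] * t) + x[1] - y
J = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
x = levenberg_marquardt(r, J, np.array([0.0, 0.0]))
```

For `len(x)` in the millions, the dense solve above is the bottleneck the Krylov projection removes; the damping-parameter loop is where recycling the subspace pays off.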

  20. Dynamic analysis of astronaut motions in microgravity: Applications for Extravehicular Activity (EVA)

    NASA Technical Reports Server (NTRS)

    Newman, Dava J.

    1995-01-01

    Simulations of astronaut motions during extravehicular activity (EVA) tasks were performed using computational multibody dynamics methods. The application of computational dynamic simulation to EVA was prompted by the realization that physical microgravity simulators have inherent limitations: viscosity in neutral buoyancy tanks; friction in air bearing floors; short duration for parabolic aircraft; and inertia and friction in suspension mechanisms. These limitations can mask critical dynamic effects that later cause problems during actual EVA's performed in space. Methods of formulating dynamic equations of motion for multibody systems are discussed with emphasis on Kane's method, which forms the basis of the simulations presented herein. Formulation of the equations of motion for a two degree of freedom arm is presented as an explicit example. The four basic steps in creating the computational simulations were: system description, in which the geometry, mass properties, and interconnection of system bodies are input to the computer; equation formulation based on the system description; inverse kinematics, in which the angles, velocities, and accelerations of joints are calculated for prescribed motion of the endpoint (hand) of the arm; and inverse dynamics, in which joint torques are calculated for a prescribed motion. A graphical animation and data plotting program, EVADS (EVA Dynamics Simulation), was developed and used to analyze the results of the simulations that were performed on a Silicon Graphics Indigo2 computer. EVA tasks involving manipulation of the Spartan 204 free flying astronomy payload, as performed during Space Shuttle mission STS-63 (February 1995), served as the subject for two dynamic simulations. An EVA crewmember was modeled as a seven segment system with an eighth segment representing the massive payload attached to the hand. 
    For both simulations, the initial configuration of the lower body (trunk, upper leg, and lower leg) was a neutral microgravity posture. In the first simulation, the payload was manipulated around a circular trajectory of 0.15 m radius in 10 seconds. It was found that the wrist joint theoretically exceeded its ulnar deviation limit by as much as 49.8 deg and was required to exert torques as high as 26 N-m to accomplish the task, well in excess of the wrist physiological limit of 12 N-m. The largest torque in the first simulation, 52 N-m, occurred in the ankle joint. To avoid these problems, the second simulation placed the arm in a more comfortable initial position and the radius and speed of the circular trajectory were reduced by half. As a result, the joint angles and torques were reduced to values well within their physiological limits. In particular, the maximum wrist torque for the second simulation was only 3 N-m and the maximum ankle torque was only 6 N-m.
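The inverse-dynamics step described above maps a prescribed motion to joint torques, tau = M(q) qdd + C(q, qd) + G(q). A much-reduced sketch for a planar two-link arm with point masses at the link tips (illustrative parameters, not the seven-segment EVA model):

```python
import numpy as np

def arm_inverse_dynamics(q, qd, qdd, m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.81):
    """Joint torques for a planar 2-link arm with point masses at the link
    tips: tau = M(q) qdd + C(q, qd) + G(q). Angles are measured from the
    horizontal (q1) and from link 1 (q2)."""
    q1, q2 = q
    c2, s2 = np.cos(q2), np.sin(q2)
    M = np.array([
        [(m1 + m2) * l1**2 + m2 * l2**2 + 2 * m2 * l1 * l2 * c2,
         m2 * l2**2 + m2 * l1 * l2 * c2],
        [m2 * l2**2 + m2 * l1 * l2 * c2, m2 * l2**2],
    ])
    h = m2 * l1 * l2 * s2
    C = np.array([-h * (2 * qd[0] * qd[1] + qd[1]**2), h * qd[0]**2])
    G = np.array([
        (m1 + m2) * g * l1 * np.cos(q1) + m2 * g * l2 * np.cos(q1 + q2),
        m2 * g * l2 * np.cos(q1 + q2),
    ])
    return M @ np.asarray(qdd) + C + G

# static, fully extended horizontal arm: torques reduce to gravity moments
tau = arm_inverse_dynamics([0.0, 0.0], [0.0, 0.0], [0.0, 0.0])
```

In the microgravity simulations the G(q) term vanishes and the torques come entirely from the inertial and Coriolis terms, which is why trajectory radius and speed dominate the joint loads.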

  1. Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.

    PubMed

    Cheng, Ching-An; Huang, Han-Pang

    2016-12-01

    We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation-the Lagrangian kernels-and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs as in machine learning and inherits the structured form as in system dynamics, thereby removing the need for the mundane derivations for new systems as well as the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify the robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.
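A generic nonparametric stand-in for this kind of inputs-to-torque learning is kernel ridge regression; the paper's contribution is to replace the generic kernel with one embedding the Euler-Lagrange structure. A toy sketch with a plain RBF kernel on a hypothetical 1-DOF system:

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# toy 1-DOF "inverse dynamics": tau = 2*qdd + sin(q), inputs -> torque
rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 2))            # columns: q, qdd
tau = 2.0 * X[:, 1] + np.sin(X[:, 0])

# kernel ridge regression: alpha = (K + lam*I)^-1 tau
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), tau)

X_test = np.array([[0.5, 0.3]])
tau_hat = rbf(X_test, X) @ alpha                  # predicted torque
```

The Lagrangian-kernel approach keeps this same train/predict pattern but constrains the hypothesis space so that predictions are consistent with some underlying Lagrangian, rather than an arbitrary smooth map.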

  2. Kinematics, controls, and path planning results for a redundant manipulator

    NASA Technical Reports Server (NTRS)

    Gretz, Bruce; Tilley, Scott W.

    1989-01-01

    The inverse kinematics solution, a modal position control algorithm, and path planning results for a 7 degree of freedom manipulator are presented. The redundant arm consists of two links with shoulder and elbow joints and a spherical wrist. The inverse kinematics problem for tip position is solved and the redundant joint is identified. It is also shown that a locus of tip positions exists in which there are kinematic limitations on self-motion. A computationally simple modal position control algorithm has been developed which guarantees a nearly constant closed-loop dynamic response throughout the workspace. If all closed-loop poles are assigned to the same location, the algorithm can be implemented with very little computation. To further reduce the required computation, the modal gains are updated only at discrete time intervals. Criteria are developed for the frequency of these updates. For commanding manipulator movements, a 5th-order spline which minimizes jerk provides a smooth tip-space path. Schemes for deriving a corresponding joint-space trajectory are discussed. Modifying the trajectory to avoid joint torque saturation when a tip payload is added is also considered. Simulation results are presented.
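The rest-to-rest fifth-order profile that minimizes jerk has the closed form s(tau) = 10 tau^3 - 15 tau^4 + 6 tau^5, which enforces zero velocity and acceleration at both endpoints. A minimal sketch of such a tip-space path generator:

```python
import numpy as np

def min_jerk(p0, pf, T, t):
    """Rest-to-rest minimum-jerk (5th-order) profile between tip positions
    p0 and pf over duration T: velocity and acceleration vanish at both
    ends, giving a smooth tip-space path."""
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return p0 + (pf - p0) * s

t = np.linspace(0.0, 2.0, 201)
p = min_jerk(0.0, 1.0, 2.0, t)
```

Mapping this tip-space path to joint space then requires the inverse kinematics solution discussed above, evaluated along the trajectory.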

  3. Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1987-01-01

    This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
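The first-order branch of such a search steps along J^T e, the negative gradient of the squared tip-position error. A sketch for a planar two-link arm, with a fixed relaxation factor standing in for the recursively computed optimal one:

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Tip position of a planar 2-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_steepest_descent(target, q0, rate=0.1, iters=2000):
    """First-order search: step along J^T e, the gradient of the squared
    tip-position error (a fixed relaxation factor replaces the paper's
    recursively computed optimal one)."""
    q = np.asarray(q0, float)
    for _ in range(iters):
        e = target - fk(q)
        q = q + rate * jacobian(q).T @ e
    return q

q = ik_steepest_descent(np.array([1.0, 1.0]), [0.2, 0.2])
```

The second-order Newton-Raphson branch of the paper converges in far fewer iterations by also using the (Gauss-Markov approximated) Hessian, at the cost of the recursive filtering/smoothing machinery needed to invert it.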

  4. Cohesive phase-field fracture and a PDE constrained optimization approach to fracture inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tupek, Michael R.

    2016-06-30

    In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.
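The adjoint method's payoff is one extra linear solve in place of one forward solve per design parameter. A minimal static sketch for J(p) = c^T u subject to K(p) u = f (the explicit-dynamics case in the paper adds a backward-in-time adjoint integration, not shown):

```python
import numpy as np

def adjoint_sensitivities(K, dK, f, c):
    """Adjoint method for J(p) = c^T u with K(p) u = f: a single extra
    solve K^T lam = c yields every dJ/dp_i = -lam^T (dK/dp_i) u, instead
    of one forward solve per parameter."""
    u = np.linalg.solve(K, f)
    lam = np.linalg.solve(K.T, c)
    return np.array([-lam @ (dKi @ u) for dKi in dK]), u

# toy: K(p) = K0 + p1*A1 + p2*A2, evaluated at p = (1, 1)
rng = np.random.default_rng(5)
K0 = np.diag([4.0, 5.0, 6.0])
A1, A2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
f, c = np.ones(3), np.array([1.0, 0.0, 0.0])
K = K0 + A1 + A2
grads, u = adjoint_sensitivities(K, [A1, A2], f, c)
```

With thousands of topology or material parameters, this is the difference between one adjoint solve and thousands of forward solves per optimization step.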

  5. Characterization of three-dimensional anisotropic heart valve tissue mechanical properties using inverse finite element analysis.

    PubMed

    Abbasi, Mostafa; Barakat, Mohammed S; Vahidkhah, Koohyar; Azadani, Ali N

    2016-09-01

    Computational modeling has an important role in design and assessment of medical devices. In computational simulations, considering accurate constitutive models is of the utmost importance to capture the mechanical response of soft tissue and biomedical materials under physiological loading conditions. The lack of comprehensive three-dimensional constitutive models for soft tissue limits the effectiveness of computational modeling in research and development of medical devices. The aim of this study was to use inverse finite element (FE) analysis to determine three-dimensional mechanical properties of bovine pericardial leaflets of a surgical bioprosthesis under dynamic loading conditions. Using inverse parameter estimation, 3D anisotropic Fung model parameters were estimated for the leaflets. The FE simulations were validated using experimental in-vitro measurements, and the impact of different constitutive material models on leaflet stress distribution was investigated. The results of this study showed that the anisotropic Fung model accurately simulated the leaflet deformation and coaptation during valve opening and closing. During systole, the peak stress reached 3.17 MPa at the leaflet boundary, while during diastole high-stress regions were primarily observed in the commissures, with a peak stress of 1.17 MPa. In addition, the Rayleigh damping coefficient that was introduced into the FE simulations to model the viscous damping effects of the surrounding fluid was determined. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles

    PubMed Central

    Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.

    2014-01-01

    We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845

  7. Computational Fluid Dynamics of Whole-Body Aircraft

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh

    1999-01-01

    The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  8. Simulations of normal and inverse laminar diffusion flames under oxygen enhancement and gravity variation

    NASA Astrophysics Data System (ADS)

    Bhatia, P.; Katta, V. R.; Krishnan, S. S.; Zheng, Y.; Sunderland, P. B.; Gore, J. P.

    2012-10-01

    Steady-state global chemistry calculations for 20 different flames were carried out using an axisymmetric Computational Fluid Dynamics (CFD) code. Computational results for 16 flames were compared with flame images obtained at the NASA Glenn Research Center. The experimental flame data for these 16 flames were taken from Sunderland et al. [4], which included normal and inverse diffusion flames of ethane with varying oxidiser compositions (21, 30, 50, 100% O2 mole fraction in N2) stabilised on a 5.5 mm diameter burner. The test conditions of this reference resulted in highly convective inverse diffusion flames (Froude numbers of the order of 10) and buoyant normal diffusion flames (Froude numbers ∼0.1). Additionally, six flames were simulated to study the effect of oxygen enhancement on normal diffusion flames. The enhancement in oxygen resulted in increased flame temperatures, and the presence of gravity led to increased gas velocities. The effect of gravity variation and oxygen enhancement on the flame shape and size of normal diffusion flames was far more pronounced than for inverse diffusion flames. For normal diffusion flames, flame lengths decreased (1 to 2 times) and flame widths increased (2 to 3 times) when going from earth gravity to microgravity, and flame height decreased by a factor of five when going from air to a pure oxygen environment.

  9. An approach to quantum-computational hydrologic inverse analysis

    DOE PAGES

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
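    The QUBO formulation at the heart of annealer-based inversion can be sketched classically. Below, a hypothetical binary permeability pattern is recovered by exhaustive search over the QUBO objective, standing in for the D-Wave hardware (the linear head-response operator is made up for illustration):

```python
import itertools
import numpy as np

# Classical stand-in for the annealer: pose a tiny binary inverse problem
# (which permeability cells are "high"?) as a QUBO and solve it by
# exhaustive search. A real run would hand Q to a quantum annealer.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))            # hypothetical linear head response
k_true = np.array([1, 0, 1, 1])        # true binary permeability pattern
h_obs = A @ k_true                     # observed hydraulic heads

# ||A k - h||^2 = k^T (A^T A) k - 2 (A^T h)^T k + const; for binary k,
# k_i^2 = k_i, so the linear term folds onto the diagonal of Q.
Q = A.T @ A - 2 * np.diag(A.T @ h_obs)

best = min(itertools.product([0, 1], repeat=4),
           key=lambda k: np.array(k) @ Q @ np.array(k))
print(best)
```

    The brute-force minimization scales exponentially with the number of cells, which is exactly the search an annealer is meant to perform in hardware.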

  12. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
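    The inverse differential kinematic relation J(q) q̇ = v that the architecture computes can be illustrated, for a planar two-link arm rather than the six-joint PUMA, with a plain linear solve:

```python
import numpy as np

# Inverse differential kinematics sketch: solve J(q) qdot = v for the
# joint rates of a planar two-link arm (unit link lengths), given a
# desired end-effector velocity v. The paper pipelines this computation
# for the six-joint PUMA; here one dense solve suffices.
def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

q = np.array([0.3, 0.8])                   # current joint angles
v = np.array([0.1, -0.2])                  # desired tip velocity
qdot = np.linalg.solve(jacobian(q), v)     # inverse Jacobian solution
print(np.allclose(jacobian(q) @ qdot, v))
```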

  13. Preliminary assessment of the robustness of dynamic inversion based flight control laws

    NASA Technical Reports Server (NTRS)

    Snell, S. A.

    1992-01-01

    Dynamic-inversion-based flight control laws present an attractive alternative to conventional gain-scheduled designs for high angle-of-attack maneuvering, where nonlinearities dominate the dynamics. Dynamic inversion is easily applied to the aircraft dynamics, requiring knowledge of the nonlinear equations of motion alone rather than an extensive set of linearizations. However, the robustness properties of dynamic inversion are questionable, especially when considering the uncertainties involved with the aerodynamic database during post-stall flight. This paper presents a simple analysis and some preliminary results of simulations with a perturbed database. It is shown that incorporating integrators into the control loops helps to improve the performance in the presence of these perturbations.
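    The basic mechanism can be sketched on a scalar plant with a deliberately perturbed model (a stand-in for aerodynamic-database error), using a simple PI outer loop whose integrator absorbs the mismatch:

```python
import numpy as np

# Scalar dynamic-inversion sketch: plant xdot = f(x) + u, inversion built
# from a perturbed model fhat (a stand-in for aerodynamic database error),
# with an integrator in the outer loop to absorb the mismatch.
f = lambda x: -x + 0.5 * np.sin(x)   # "true" dynamics
fhat = lambda x: -x                  # model used by the inversion
x, z, dt = 0.0, 0.0, 1e-3
x_ref = 1.0                          # commanded state
for _ in range(20000):
    e = x_ref - x
    z += e * dt                      # integral of tracking error
    v = 4.0 * e + 2.0 * z            # PI outer loop commands xdot
    u = v - fhat(x)                  # invert the model dynamics
    x += (f(x) + u) * dt             # Euler step of the true plant
print(round(x, 3))
```

    Without the integral term (z), the model error f - fhat would leave a steady-state offset; with it, x settles at the reference despite the inversion being built on the wrong model.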

  14. Adaptive control of robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The author presents a novel approach to adaptive control of manipulators to achieve trajectory tracking by the joint angles. The central concept in this approach is the utilization of the manipulator inverse as a feedforward controller. The desired trajectory is applied as an input to the feedforward controller which behaves as the inverse of the manipulator at any operating point; the controller output is used as the driving torque for the manipulator. The controller gains are then updated by an adaptation algorithm derived from MRAC (model reference adaptive control) theory to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal are also used to enhance closed-loop stability and to achieve faster adaptation. The proposed control scheme is computationally fast and does not require a priori knowledge of the complex dynamic model or the parameter values of the manipulator or the payload.
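    A minimal scalar sketch of the gain-adaptation idea (an MIT-rule-style update on a hypothetical constant-gain plant, far simpler than the manipulator-inverse case the abstract describes):

```python
# MIT-rule flavour of feedforward-gain adaptation on a scalar plant
# y = theta * u with unknown gain theta: the feedforward gain k is driven
# by the tracking error until it approaches the plant inverse 1/theta.
theta = 2.5                  # unknown plant gain (known positive)
k, gamma, dt = 0.0, 0.5, 0.01
for step in range(5000):
    r = 1.0 if (step // 200) % 2 == 0 else -1.0   # square-wave reference
    y = theta * (k * r)      # plant output under feedforward control
    e = y - r                # tracking error
    k -= gamma * e * r * dt  # gradient-descent adaptation of the gain
print(round(k, 3))           # approaches 1/theta
```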

  15. Kinematics and dynamics of a six-degree-of-freedom robot manipulator with closed kinematic chain mechanism

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Pooran, Farhad J.

    1989-01-01

    This paper deals with a class of robot manipulators built based on the closed kinematic chain mechanism (CKCM). This class of CKCM manipulators consists of a fixed and a moving platform coupled together via a number of in-parallel actuators. A closed-form solution is derived for the inverse kinematic problem of a six-degree-of-freedom CKCM manipulator designed to study robotic applications in space. The iterative Newton-Raphson method is employed to solve the forward kinematic problem. The dynamics of the above manipulator are derived using the Lagrangian approach. Computer simulation of the dynamical equations shows that the actuating forces are strongly dependent on the mass and centroid of the robot links.
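    The Newton-Raphson iteration used for the kinematic problem can be sketched on the kinematic equations of a planar two-link arm (an illustrative stand-in for the six-DOF CKCM platform):

```python
import numpy as np

# Newton-Raphson on kinematic equations, illustrated on a planar two-link
# arm with unit link lengths: solve p(q) = target for joint angles q.
def p(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

target = np.array([1.2, 0.7])                # reachable tip position
q = np.array([0.1, 0.5])                     # initial guess
for _ in range(30):
    q += np.linalg.solve(jac(q), target - p(q))  # Newton step
print(np.allclose(p(q), target))
```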

  16. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using both synthetic and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  17. Use of symbolic computation in robotics education

    NASA Technical Reports Server (NTRS)

    Vira, Naren; Tunstel, Edward

    1992-01-01

    An application of symbolic computation in robotics education is described. A software package is presented which combines generality, user interaction, and user-friendliness with the systematic usage of symbolic computation and artificial intelligence techniques. The software utilizes MACSYMA, a LISP-based symbolic algebra language, to automatically generate closed-form expressions representing forward and inverse kinematics solutions, the Jacobian transformation matrices, robot pose error-compensation model equations, and the Lagrange dynamics formulation for N-degree-of-freedom, open-chain robotic manipulators. The goal of such a package is to aid faculty and students in the robotics course by removing the burdensome tasks of mathematical manipulation. The software package has been successfully tested for its accuracy using commercially available robots.

  18. Hybrid-dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  19. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter as the time step of the numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms with not only the Euler method but also lower-order Runge-Kutta methods applied for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
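    A simultaneous variant of the multiplicative update (a sketch in the same algorithmic family as BI-MART, not the paper's block-iterative scheme) on a tiny consistent system, with the scaling parameter playing the role of the time step:

```python
import numpy as np

# Simultaneous multiplicative ART sketch on a consistent 3x3 tomographic
# system W x = proj: each pixel is rescaled by a geometric mean of the
# measured/reprojected ratios; lam plays the role of the time step in the
# continuous-time analogy.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 0.5])
proj = W @ x_true                       # consistent projection data
x = np.ones(3)                          # positive initial image
lam = 1.0                               # scaling parameter ("time step")
col = W.sum(axis=0)                     # per-pixel normalisation
for _ in range(300):
    ratio = proj / (W @ x)              # measured / reprojected
    x *= np.exp(lam * (W.T @ np.log(ratio)) / col)
print(np.round(x, 4))
```

    The multiplicative form keeps x strictly positive at every iteration, which is the non-negativity constraint the Lyapunov argument relies on.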

  20. A computational procedure for the dynamics of flexible beams within multibody systems. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Downer, Janice Diane

    1990-01-01

    The dynamic analysis of three-dimensional elastic beams which experience large rotational and large deformational motions is examined. The beam motion is modeled using an inertial reference for the translational displacements and a body-fixed reference for the rotational quantities. Finite strain rod theories are then defined in conjunction with the beam kinematic description which accounts for the effects of stretching, bending, torsion, and transverse shear deformations. A convected coordinate representation of the Cauchy stress tensor and a conjugate strain definition is introduced to model the beam deformation. To treat the beam dynamics, a two-stage modification of the central difference algorithm is presented to integrate the translational coordinates and the angular velocity vector. The angular orientation is then obtained from the application of an implicit integration algorithm to the Euler parameter/angular velocity kinematical relation. The combined developments of the objective internal force computation with the dynamic solution procedures result in the computational preservation of total energy for undamped systems. The present methodology is also extended to model the dynamics of deployment/retrieval of the flexible members. A moving spatial grid corresponding to the configuration of a deployed rigid beam is employed as a reference for the dynamic variables. A transient integration scheme which accurately accounts for the deforming spatial grid is derived from a space-time finite element discretization of a Hamiltonian variational statement. The computational results of this general deforming finite element beam formulation are compared to reported results for a planar inverse-spaghetti problem.

  1. The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics

    NASA Astrophysics Data System (ADS)

    Wan, S.; He, W.

    2016-12-01

    The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach is thereby proposed in the present paper to estimate model errors based on EM. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of statistics and dynamics to a certain extent.

  2. Sedimentation of knotted polymers

    NASA Astrophysics Data System (ADS)

    Piili, J.; Marenduzzo, D.; Kaski, K.; Linna, R. P.

    2013-01-01

    We investigate the sedimentation of knotted polymers by means of stochastic rotation dynamics, a molecular dynamics algorithm that takes hydrodynamics fully into account. We show that the sedimentation coefficient s, related to the terminal velocity of the knotted polymers, increases linearly with the average crossing number nc of the corresponding ideal knot. This provides direct computational confirmation of this relation, postulated on the basis of sedimentation experiments by Rybenkov et al. [J. Mol. Biol. 267, 299 (1997)]. Such a relation was previously shown to hold with simulations for knot electrophoresis. We also show that there is an accurate linear dependence of s on the inverse radius of gyration 1/Rg, more specifically on the inverse of the Rg component that is perpendicular to the direction along which the polymer sediments. When the polymer sediments in a slab, the walls affect the results appreciably. However, 1/Rg remains to a good precision linearly dependent on nc. Therefore, 1/Rg is a good measure of a knot's complexity.
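    The reported linear relations are ordinary least-squares fits; a sketch with made-up sedimentation data standing in for the paper's simulation results:

```python
import numpy as np

# Least-squares fit of sedimentation coefficient s against average
# crossing number nc. The data here are synthetic (slope 0.21,
# intercept 1.8, small Gaussian noise), not the paper's measurements.
nc = np.array([0.0, 3.4, 4.0, 6.2, 7.0, 9.0])    # average crossing numbers
s = 1.8 + 0.21 * nc + np.random.default_rng(1).normal(0.0, 0.02, nc.size)
slope, intercept = np.polyfit(nc, s, 1)          # linear fit s = a*nc + b
print(round(slope, 2), round(intercept, 2))
```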

  3. Inverse problems and computational cell metabolic models: a statistical approach

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Somersalo, E.

    2008-07-01

    In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on a detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach, which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.

  4. A time domain inverse dynamic method for the end point tracking control of a flexible manipulator

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo; Book, Wayne J.

    1991-01-01

    The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into the causal part and the anticausal part, we calculated the torque and the trajectories of all state variables for a given end point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. The open loop control of the inverse dynamic method shows an excellent result in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.

  5. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
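    The core Levenberg-Marquardt step the paper accelerates, shown here with a plain dense solve on a toy two-parameter exponential fit (the paper's Krylov projection and subspace recycling are omitted):

```python
import numpy as np

# Plain Levenberg-Marquardt iteration: each step solves
# (J^T J + mu*I) dx = J^T r. The paper replaces this dense solve with a
# recycled Krylov-subspace solve; the toy problem here fits y = a*exp(-b*t).
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.3 * t)                       # data from a=2, b=1.3
x = np.array([1.0, 0.0])                         # initial guess (a, b)
mu = 1e-2                                        # fixed damping parameter
for _ in range(50):
    a, b = x
    r = y - a * np.exp(-b * t)                   # residual
    J = np.column_stack([np.exp(-b * t),         # d model / d a
                         -a * t * np.exp(-b * t)])  # d model / d b
    dx = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ r)
    x += dx
print(np.round(x, 3))
```

    For highly parameterized problems the linear solve dominates the cost, which is why projecting it onto a small Krylov subspace pays off.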

  6. Network-Physics(NP) Bec DIGITAL(#)-VULNERABILITY Versus Fault-Tolerant Analog

    NASA Astrophysics Data System (ADS)

    Alexander, G. K.; Hathaway, M.; Schmidt, H. E.; Siegel, E.

    2011-03-01

    Siegel[AMS Joint Mtg.(2002)-Abs.973-60-124] digits logarithmic-(Newcomb(1881)-Weyl(1914; 1916)-Benford(1938)-"NeWBe"/"OLDbe")-law algebraic-inversion to ONLY BEQS BEC:Quanta/Bosons= digits: Synthesis reveals EMP-like SEVERE VULNERABILITY of ONLY DIGITAL-networks(VS. FAULT-TOLERANT ANALOG INvulnerability) via Barabasi "Network-Physics" relative-``statics''(VS.dynamics-[Willinger-Alderson-Doyle(Not.AMS(5/09)]-]critique); (so called)"Quantum-computing is simple-arithmetic(sans division/ factorization); algorithmic-complexities: INtractibility/ UNdecidability/ INefficiency/NONcomputability / HARDNESS(so MIScalled) "noise"-induced-phase-transitions(NITS) ACCELERATION: Cook-Levin theorem Reducibility is Renormalization-(Semi)-Group fixed-points; number-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(02)] How? mea culpa)can ONLY be MBCS "hot-plasma" versus digit-clumping NON-random BEC; Modular-arithmetic Congruences= Signal X Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)] BEC logarithmic-law inversion factorization:Watkins number-thy. U stat.-phys.); P=/=NP TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation via geometry.

  7. Damping mathematical modelling and dynamic responses for FRP laminated composite plates with polymer matrix

    NASA Astrophysics Data System (ADS)

    Liu, Qimao

    2018-02-01

    This paper proposes the assumption that the fibre is an elastic material and the polymer matrix is a viscoelastic material, so that the energy dissipation depends only on the polymer matrix during the dynamic response. The damping force vectors, in the frequency and time domains, of FRP (Fibre-Reinforced Polymer matrix) laminated composite plates are derived based on this assumption. The governing equations of FRP laminated composite plates are formulated in both the frequency and time domains. The direct inversion method and the direct time integration method for nonviscously damped systems are employed to solve the governing equations and obtain the dynamic responses in the frequency and time domains, respectively. The computational procedure is given in detail. Finally, dynamic responses (frequency responses with nonzero and zero initial conditions, free vibration, forced vibrations with nonzero and zero initial conditions) of an FRP laminated composite plate are computed using the proposed methodology. The proposed methodology is easy to insert into commercial finite element analysis software. The proposed assumption, based on the theory of material mechanics, needs to be further validated by experimental techniques in the future.

  8. Real-time characterization of partially observed epidemics using surrogate models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safta, Cosmin; Ray, Jaideep; Lefantzi, Sophia

    We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as to provide gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes, without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model on a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is O(10)-O(10^2) less than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological models in question, it may be possible to omit the offline creation and caching of surrogate models, prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as observations from the 1918 influenza pandemic collected at Camp Custer, Michigan.

  9. Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions

    ERIC Educational Resources Information Center

    Moreira, M. V.; Basilio, J. C.

    2012-01-01

    All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
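    The direct division method's behavior is easy to reproduce: synthetic division of B(z) by A(z) in powers of z^-1 yields numeric samples of x(k), but, as the article notes, no closed-form expression:

```python
# Direct-division (power-series) inverse Z-transform: recover x(0..n-1)
# from X(z) = B(z)/A(z), with coefficients given in powers of z^-1.
def inverse_z(b, a, n):
    # b, a: coefficients of z^0, z^-1, ...; a[0] must be nonzero
    x = []
    b = b + [0.0] * n                 # working remainder (local copy)
    for k in range(n):
        xk = b[k] / a[0]              # next quotient coefficient
        for j in range(1, len(a)):    # subtract xk * a(z) * z^-k
            if k + j < len(b):
                b[k + j] -= xk * a[j]
        x.append(xk)
    return x

# X(z) = 1 / (1 - 0.5 z^-1)  ->  x(k) = 0.5**k
print(inverse_z([1.0], [1.0, -0.5], 5))
```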

  10. Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.

    PubMed

    Liu, X; Zhai, Z

    2007-12-01

    Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.

  11. Waveform inversion with source encoding for breast sound speed reconstruction in ultrasound computed tomography.

    PubMed

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
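
    The source-encoding idea can be sketched on a linear toy problem. The real WISE method minimizes a waveform misfit governed by the acoustic wave equation; the least-squares stand-in, names, sizes, and step size below are all assumptions for illustration only:

```python
import random

def sgd_source_encoding(A, d, iters=2000, lr=0.05, seed=0):
    """Minimise ||A m - d||^2 by stochastic gradient descent, where each
    iteration 'encodes' all measurements with a single random Rademacher
    vector w.  Because E[w w^T] = I, the encoded gradient
    (A^T w)(w^T (A m - d)) is an unbiased estimate of the full gradient,
    so only one 'encoded' residual is evaluated per iteration."""
    rng = random.Random(seed)
    n_meas, n_par = len(A), len(A[0])
    m = [0.0] * n_par
    for _ in range(iters):
        w = [rng.choice((-1.0, 1.0)) for _ in range(n_meas)]
        # encoded scalar residual r = w^T (A m - d)
        r = sum(w[i] * (sum(A[i][j] * m[j] for j in range(n_par)) - d[i])
                for i in range(n_meas))
        # gradient of 0.5 * r^2 with respect to m is (A^T w) * r
        for j in range(n_par):
            m[j] -= lr * sum(A[i][j] * w[i] for i in range(n_meas)) * r
    return m
```

    In the wave-equation setting the same trick collapses all sources into one randomly weighted "super source" per iteration, which is where the computational savings come from.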

  12. Activated dynamics in dense fluids of attractive nonspherical particles. II. Elasticity, barriers, relaxation, fragility, and self-diffusion

    NASA Astrophysics Data System (ADS)

    Tripathy, Mukta; Schweizer, Kenneth S.

    2011-04-01

    In paper II of this series we apply the center-of-mass version of Nonlinear Langevin Equation theory to study how short-range attractive interactions influence the elastic shear modulus, transient localization length, activated dynamics, and kinetic arrest of a variety of nonspherical particle dense fluids (and the spherical analog) as a function of volume fraction and attraction strength. The activation barrier (roughly the natural logarithm of the dimensionless relaxation time) is predicted to be a rich function of particle shape, volume fraction, and attraction strength, and the dynamic fragility varies significantly with particle shape. At fixed volume fraction, the barrier grows in a parabolic manner with inverse temperature nondimensionalized by an onset value, analogous to what has been established for thermal glass-forming liquids. Kinetic arrest boundaries lie at significantly higher volume fractions and attraction strengths relative to their dynamic crossover analogs, but their particle shape dependence remains the same. A limited universality of barrier heights is found based on the concept of an effective mean-square confining force. The mean hopping time and self-diffusion constant in the attractive glass region of the nonequilibrium phase diagram are predicted to vary nonmonotonically with attraction strength or inverse temperature, qualitatively consistent with recent computer simulations and colloid experiments.

  13. Revisiting the body-schema concept in the context of whole-body postural-focal dynamics.

    PubMed

    Morasso, Pietro; Casadio, Maura; Mohan, Vishwanathan; Rea, Francesco; Zenzeri, Jacopo

    2015-01-01

    The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such a computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole body, and a postural task, namely maintaining overall stability.

  14. Revisiting the Body-Schema Concept in the Context of Whole-Body Postural-Focal Dynamics

    PubMed Central

    Morasso, Pietro; Casadio, Maura; Mohan, Vishwanathan; Rea, Francesco; Zenzeri, Jacopo

    2015-01-01

    The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such a computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory–motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole body, and a postural task, namely maintaining overall stability. PMID:25741274

  15. Inverse Optimization: A New Perspective on the Black-Litterman Model.

    PubMed

    Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch

    2012-12-11

    The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.

  16. Modified Dynamic Inversion to Control Large Flexible Aircraft: What's Going On?

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.

    1999-01-01

    High performance aircraft of the future will be designed lighter, more maneuverable, and operate over an ever expanding flight envelope. One of the largest differences from the flight control perspective between current and future advanced aircraft is elasticity. Over the last decade, dynamic inversion methodology has gained considerable popularity in application to highly maneuverable fighter aircraft, which were treated as rigid vehicles. This paper explores dynamic inversion application to an advanced highly flexible aircraft. An initial application has been made to a large flexible supersonic aircraft. In the course of controller design for this advanced vehicle, modifications were made to the standard dynamic inversion methodology. The results of this application were deemed rather promising. An analytical study has been undertaken to better understand the nature of these modifications and to determine their general applicability. This paper presents the results of this initial analytical look at the modifications to dynamic inversion to control large flexible aircraft.
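
    The core of standard dynamic inversion, for a scalar system xdot = f(x) + g(x)*u, is to choose the control that cancels the plant dynamics and imposes desired linear error dynamics. A minimal sketch (f, g, gains, and the reference are illustrative placeholders, not the paper's flexible-aircraft model):

```python
import math

def simulate_dynamic_inversion(x0=2.0, x_ref=0.5, k=4.0, dt=1e-3, steps=3000):
    """Feedback-linearise xdot = f(x) + g(x)*u by choosing
        u = (v - f(x)) / g(x),   v = -k * (x - x_ref),
    so that the closed loop obeys xdot = -k*(x - x_ref) exactly.
    f and g below are assumed toy nonlinearities with g(x) != 0."""
    f = lambda x: -x + math.sin(x)       # plant nonlinearity (assumed)
    g = lambda x: 1.0 + 0.5 * x * x      # control effectiveness, never zero
    x = x0
    for _ in range(steps):
        v = -k * (x - x_ref)             # commanded linear error dynamics
        u = (v - f(x)) / g(x)            # dynamic inversion control law
        x += dt * (f(x) + g(x) * u)      # forward-Euler step of closed loop
    return x
```

    Elasticity complicates this picture because the inverted model no longer captures the true (flexible) dynamics, which is what motivates the paper's modifications.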

  17. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Z; Terry, N; Hubbard, S S

    2013-02-12

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.

  18. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.

    2013-02-22

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.

  19. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) ability to properly treat complex valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, achieved without eigensolutions or inversion of a large matrix.

  20. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithms are guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n sq) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inverses in O(log sub 2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves the computational time of O(log sub 2 n) for each iteration. Simulation results for a seven degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
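
    The diagonal (Jacobi) preconditioned variant can be sketched serially as follows; the record's contribution, the O(log sub 2 n) parallel evaluation on n processors, is not reproduced here:

```python
def pcg_jacobi(A, b, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    system A x = b, using the diagonal of A as the preconditioner
    (the first of the two PCG variants described above)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A x with x = 0
    minv = [1.0 / A[i][i] for i in range(n)]   # inverse of diagonal preconditioner
    z = [minv[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) < tol:     # converged
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

    For manipulator dynamics, A would be the joint-space mass matrix; a good preconditioner clusters its eigenvalues so far fewer than n iterations are needed.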

  1. Integration of Visual and Joint Information to Enable Linear Reaching Motions

    NASA Astrophysics Data System (ADS)

    Eberle, Henry; Nasuto, Slawomir J.; Hayashi, Yoshikatsu

    2017-01-01

    A new dynamics-driven control law was developed for a robot arm, based on the feedback control law which uses the linear transformation directly from work space to joint space. This was validated using a simulation of a two-joint planar robot arm, and an optimisation algorithm was used to find the optimum matrix to generate straight trajectories of the end-effector in the work space. We found that this linear matrix can be decomposed into the rotation matrix representing the orientation of the goal direction and the joint relation matrix (MJRM) representing the joint response to errors in the Cartesian work space. The decomposition of the linear matrix indicates the separation of path planning in terms of the direction of the reaching motion and the synergies of joint coordination. Once the MJRM is numerically obtained, the feedforward planning of reaching direction allows us to provide asymptotically stable, linear trajectories in the entire work space through rotational transformation, completely avoiding the use of inverse kinematics. Our dynamics-driven control law suggests an interesting framework for interpreting human reaching motion control, an alternative to the dominant inverse-method-based explanations, that avoids expensive computation of the inverse kinematics and the point-to-point control along desired trajectories.
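
    The idea of mapping work-space error directly to joint velocities for a two-joint planar arm, without ever solving inverse kinematics, can be sketched as below. Note this sketch uses a plain Jacobian-transpose map as a stand-in for the paper's optimized joint relation matrix; link lengths, gains, and the start pose are assumptions:

```python
import math

def reach(target, q=(0.3, 0.3), l1=1.0, l2=1.0, dt=0.05, steps=400):
    """Drive a two-joint planar arm toward a Cartesian target by mapping
    the work-space error through the Jacobian transpose to joint
    velocities; no inverse kinematics is computed at any point."""
    def fk(q1, q2):  # forward kinematics of the end-effector
        return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
                l1 * math.sin(q1) + l2 * math.sin(q1 + q2))
    q1, q2 = q
    for _ in range(steps):
        x, y = fk(q1, q2)
        ex, ey = target[0] - x, target[1] - y      # Cartesian error
        # analytic Jacobian of (x, y) with respect to (q1, q2)
        j11 = -l1 * math.sin(q1) - l2 * math.sin(q1 + q2)
        j12 = -l2 * math.sin(q1 + q2)
        j21 = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
        j22 = l2 * math.cos(q1 + q2)
        # joint velocities = J^T * error (gradient descent on ||error||^2)
        q1 += dt * (j11 * ex + j21 * ey)
        q2 += dt * (j12 * ex + j22 * ey)
    return fk(q1, q2)
```

    Unlike this generic gradient map, the paper's optimised MJRM is tuned so that the resulting end-effector paths are straight lines in the work space.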

  2. Simulation of Constrained Musculoskeletal Systems in Task Space.

    PubMed

    Stanev, Dimitar; Moustakas, Konstantinos

    2018-02-01

    This paper proposes an operational task space formalization of constrained musculoskeletal systems, motivated by its promising results in the field of robotics. The change of representation requires different algorithms for solving the inverse and forward dynamics simulation in the task space domain. We propose an extension to the direct marker control and an adaptation of the computed muscle control algorithms for solving the inverse kinematics and muscle redundancy problems, respectively. Experimental evaluation demonstrates that this framework is not only successful in dealing with the inverse dynamics problem, but also provides an intuitive way of studying and designing simulations, facilitating assessment prior to any experimental data collection. The incorporation of constraints in the derivation unveils an important extension of this framework toward addressing systems that use absolute coordinates and topologies that contain closed kinematic chains. Task space projection reveals a more intuitive encoding of the motion planning problem, allows for better correspondence between observed and estimated variables, provides the means to effectively study the role of kinematic redundancy, and most importantly, offers an abstract point of view and control, which can be advantageous toward further integration with high level models of the precommand level. Task-based approaches could be adopted in the design of simulation related to the study of constrained musculoskeletal systems.

  3. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
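
    The damped linear solve at the heart of Levenberg-Marquardt can be sketched on a toy two-parameter fit. The record's contribution, Krylov-subspace projection and recycling across damping parameters for large systems, is replaced here by a closed-form 2x2 solve; the model, data, and schedule are assumptions:

```python
import math

def levenberg_marquardt(ts, ys, a=1.0, b=0.0, lam=1e-2, iters=100):
    """Fit y = a*exp(b*t) by Levenberg-Marquardt: each step solves the
    damped normal equations (J^T J + lam*I) d = J^T r, accepts the step
    if the cost drops (relaxing the damping), otherwise increases it."""
    def cost(a, b):
        return sum((y - a * math.exp(b * t)) ** 2 for t, y in zip(ts, ys))
    c = cost(a, b)
    for _ in range(iters):
        r = [y - a * math.exp(b * t) for t, y in zip(ts, ys)]
        J = [(math.exp(b * t), a * t * math.exp(b * t)) for t in ts]
        g11 = sum(j0 * j0 for j0, _ in J) + lam
        g12 = sum(j0 * j1 for j0, j1 in J)
        g22 = sum(j1 * j1 for _, j1 in J) + lam
        h1 = sum(j0 * ri for (j0, _), ri in zip(J, r))
        h2 = sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (g22 * h1 - g12 * h2) / det     # closed-form 2x2 solve
        db = (g11 * h2 - g12 * h1) / det
        if cost(a + da, b + db) < c:         # accept: relax damping
            a, b = a + da, b + db
            c, lam = cost(a, b), lam * 0.1
        else:                                # reject: increase damping
            lam *= 10.0
    return a, b
```

    For highly parameterized models this damped solve dominates the cost, which is exactly the step the record's Krylov projection and recycling accelerates.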

  4. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  5. Gradient-free MCMC methods for dynamic causal modelling

    DOE PAGES

    Sengupta, Biswa; Friston, Karl J.; Penny, Will D.

    2015-03-14

    Here, we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density -- albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler).
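
    The baseline among the four samplers, random walk Metropolis, can be sketched as below; the target density, step size, and chain length are illustrative, not the neural mass model used in the record:

```python
import math
import random

def rw_metropolis(logp, x0=0.0, step=1.0, n=20000, seed=1):
    """Random walk Metropolis: propose x' = x + step*N(0,1) and accept
    with probability min(1, p(x')/p(x)), working in log-density space.
    Returns the full chain (no burn-in removal, for simplicity)."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    chain = []
    for _ in range(n):
        xp = x + step * rng.gauss(0.0, 1.0)   # symmetric proposal
        lpp = logp(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp                   # accept
        chain.append(x)
    return chain
```

    The record's efficiency metric, independent samples per unit time, discounts such a chain by its autocorrelation time, which is where adaptive and population-based variants gain their advantage.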

  6. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from "snapshots" drawn from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that, using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.

  7. Fuzzy logic based robotic controller

    NASA Technical Reports Server (NTRS)

    Attia, F.; Upadhyaya, M.

    1994-01-01

    Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to lack of proper linearization of these effects, modern control theory based on state space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially where mathematical modeling is evaluated for real-time operators. Fuzzy Logic Control is a fast emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach as obtained from software implementation are also discussed.
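
    A single-input fuzzy rule evaluation with triangular membership functions and centroid defuzzification can be sketched as below; the labels, breakpoints, and outputs are illustrative, not taken from the record:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(error):
    """Map a fuzzy end-point error to a crisp speed command with three
    rules and centroid (weighted average) defuzzification."""
    rules = [
        (tri(error, -2.0, -1.0, 0.0), -1.0),   # error negative -> reverse
        (tri(error, -1.0,  0.0, 1.0),  0.0),   # error near zero -> hold
        (tri(error,  0.0,  1.0, 2.0),  1.0),   # error positive -> forward
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

    A full controller would fuzzify several inputs (end-point error, joint locations, obstacle proximity, as above) and combine a larger rule base the same way, but the evaluate-then-defuzzify pattern is the same.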

  8. An inviscid-viscous interaction approach to the calculation of dynamic stall initiation on airfoils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cebeci, T.; Platzer, M.F.; Jang, H.M.

    An interactive boundary-layer method is described for computing unsteady incompressible flow over airfoils, including the initiation of dynamic stall. The inviscid unsteady panel method developed by Platzer and Teng is extended to include viscous effects. The solutions of the boundary-layer equations are obtained with an inverse finite-difference method employing an interaction law based on the Hilbert integral, and the algebraic eddy-viscosity formulation of Cebeci and Smith. The method is applied to airfoils subject to periodic and ramp-type motions and its abilities are examined for a range of angles of attack, reduced frequency, and pitch rate.

  9. Next Generation Robots for STEM Education andResearch at Huston Tillotson University

    DTIC Science & Technology

    2017-11-10

    dynamics through the following command: roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch Part B: Gravity Inversion : After...understood the system’s natural dynamics. roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch Part B: Gravity Inversion ...is created using the following command: roslaunch mtb_lab6_feedback_linearization gravity_inversion.launch Gravity inversion is just one

  10. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Other data, e.g. from ocean buoys, are also sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  11. Breast ultrasound computed tomography using waveform inversion with source encoding

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2015-03-01

Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm² reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
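    The encoding step can be sketched with a toy linear forward model standing in for the acoustic wave solver: a random Rademacher vector collapses all per-source misfits into one encoded "supershot", and stochastic gradient descent on the encoded misfit still recovers the model because the encoded gradient is unbiased. All operators, sizes, and the step size below are invented for illustration, not the paper's configuration.

```python
import numpy as np

# Toy source-encoding sketch (hypothetical linear stand-in for the
# wave-equation forward model used by the real WISE method).
rng = np.random.default_rng(0)
n_src, n_par, n_obs = 32, 10, 40
A = [rng.standard_normal((n_obs, n_par)) for _ in range(n_src)]  # per-source forward operators
c_true = rng.standard_normal(n_par)                              # "speed-of-sound" parameters
d = [Ai @ c_true for Ai in A]                                    # noise-free measurements

c = np.zeros(n_par)
step = 0.005
for _ in range(3000):
    w = rng.choice([-1.0, 1.0], size=n_src)        # random Rademacher encoding vector
    A_enc = sum(wi * Ai for wi, Ai in zip(w, A))   # one encoded "supershot" operator
    d_enc = sum(wi * di for wi, di in zip(w, d))   # correspondingly encoded data
    # Stochastic gradient: its expectation over w is the full-data gradient,
    # but it costs one simulation instead of one per source.
    c -= step / n_src * (A_enc.T @ (A_enc @ c - d_enc))

print(float(np.max(np.abs(c - c_true))))
```

    One encoded solve per iteration replaces the per-source sweep, which is exactly where the reported per-iteration speedup comes from.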

  12. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more, and higher quality, near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) for a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the rupture time derived from it is smoother than the one searched for directly.
By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.
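    The kind of correlated prior described above can be illustrated by jointly sampling slip and rupture-velocity profiles from a covariance with prescribed auto- and cross-correlation. The exponential kernel, correlation length, cross-correlation coefficient, and physical scales below are illustrative assumptions, not the values of the cited studies.

```python
import numpy as np

# Hedged sketch: draw slip and rupture-velocity profiles whose auto- and
# cross-correlations act as a physics-based prior.
rng = np.random.default_rng(1)
n = 50                                   # points along strike
x = np.linspace(0.0, 20.0, n)            # km
L = 4.0                                  # correlation length (assumed)
rho = 0.6                                # slip / rupture-velocity cross-correlation (assumed)

C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)   # exponential auto-covariance
# Joint covariance of the stacked vector [slip; rupture velocity]:
K = np.block([[C, rho * C], [rho * C, C]])
Lchol = np.linalg.cholesky(K + 1e-10 * np.eye(2 * n))  # jitter for numerical stability
z = Lchol @ rng.standard_normal(2 * n)

slip = 2.0 + 0.5 * z[:n]          # m, illustrative mean and scale
vrup = 2.8 + 0.3 * z[n:]          # km/s, illustrative mean and scale

r = np.corrcoef(z[:n], z[n:])[0, 1]
print(round(float(r), 2))
```

    In a Bayesian inversion, candidate models would be scored against (or proposed from) exactly such a joint covariance instead of an uncorrelated smoothing term.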

  13. Superelement model based parallel algorithm for vehicle dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, O.P.; Danhof, K.J.; Kumar, R.

    1994-05-01

This paper presents a superelement model based parallel algorithm for planar vehicle dynamics. The vehicle model is made up of a chassis and two suspension systems, each of which consists of an axle-wheel assembly and two trailing arms. In this model, the chassis is treated as a Cartesian element and each suspension system is treated as a superelement. The parameters associated with the superelements are computed using an inverse dynamics technique. Suspension shock absorbers and the tires are modeled by nonlinear springs and dampers. The Euler-Lagrange approach is used to develop the system equations of motion. This leads to a system of differential and algebraic equations in which the constraints internal to superelements appear only explicitly. The above formulation is implemented on a multiprocessor machine. The numerical flow chart is divided into modules and the computation of several modules is performed in parallel to gain computational efficiency. In this implementation, the master (parent processor) creates a pool of slaves (child processors) at the beginning of the program. The slaves remain in the pool until they are needed to perform certain tasks. Upon completion of a particular task, a slave returns to the pool. This improves the overall response time of the algorithm. The formulation presented is general, which makes it attractive for general purpose code development. Speedups obtained in the different modules of the dynamic analysis computation are also presented. Results show that the superelement model based parallel algorithm can significantly reduce the vehicle dynamics simulation time. 52 refs.

  14. Approximated Stable Inversion for Nonlinear Systems with Nonhyperbolic Internal Dynamics. Revised

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1999-01-01

A technique to achieve output tracking for nonminimum phase nonlinear systems with nonhyperbolic internal dynamics is presented. The present paper integrates stable inversion techniques (that achieve exact-tracking) with approximation techniques (that modify the internal dynamics) to circumvent the nonhyperbolicity of the internal dynamics - this nonhyperbolicity is an obstruction to applying presently available stable inversion techniques. The theory is developed for nonlinear systems and the method is applied to a two-cart with inverted-pendulum example.

  15. A parallel algorithm for 2D visco-acoustic frequency-domain full-waveform inversion: application to a dense OBS data set

    NASA Astrophysics Data System (ADS)

    Sourbier, F.; Operto, S.; Virieux, J.

    2006-12-01

We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies by proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into three main steps: first, a symbolic analysis step that re-orders the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization and estimates the assembly tree of the matrix; second, the factorization, performed with dynamic scheduling to accommodate numerical pivoting, which provides the LU factors distributed over all the processors; third, the resolution, performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel.
Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient. In the end, the gradient is centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires performing one simulation per non-redundant shot and receiver position. The same strategy as the one used for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a finite-difference grid of 4201 x 1001 points with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on six 32-bit bi-processor nodes with 4 Gbytes of RAM per node when only the LU factorization is performed in parallel. Preliminary estimates of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.
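    The central economy of this frequency-domain approach, factoring the impedance matrix once per frequency and then reusing the factors for every shot and every back-propagated residual, can be sketched with a small dense stand-in for the sparse MUMPS factorization (the matrix and right-hand sides below are invented).

```python
import numpy as np

# Sketch of the factor-once / solve-many pattern behind frequency-domain FWI
# (dense stand-in for the sparse multifrontal LU factorization).
rng = np.random.default_rng(2)
n, n_shots = 200, 16
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned "impedance" matrix
B = rng.standard_normal((n, n_shots))             # one right-hand side per shot

# With a matrix of right-hand sides, LAPACK factors A once and
# back-substitutes for every column; this is the amortization exploited
# when about a thousand shots share one matrix per frequency.
U_all = np.linalg.solve(A, B)

# Same answers as per-shot solves (which would redo the factorization each time):
U_loop = np.column_stack([np.linalg.solve(A, B[:, k]) for k in range(n_shots)])
assert np.allclose(U_all, U_loop)
```

    In the real code the factors are additionally distributed across processors, so the multi-source back-substitutions run in parallel as well.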

  16. Computational fluid dynamics of airfoils and wings

    NASA Technical Reports Server (NTRS)

    Garabedian, P.; Mcfadden, G.

    1982-01-01

    It is pointed out that transonic flow is one of the fields where computational fluid dynamics turns out to be most effective. Codes for the design and analysis of supercritical airfoils and wings have become standard tools of the aircraft industry. The present investigation is concerned with mathematical models and theorems which account for some of the progress that has been made. The most successful aerodynamics codes are those for the analysis of flow at off-design conditions where weak shock waves appear. A major breakthrough was achieved by Murman and Cole (1971), who conceived of a retarded difference scheme which incorporates artificial viscosity to capture shocks in the supersonic zone. This concept has been used to develop codes for the analysis of transonic flow past a swept wing. Attention is given to the trailing edge and the boundary layer, entropy inequalities and wave drag, shockless airfoils, and the inverse swept wing code.

  17. Cellular Automata

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard

    1991-08-01

Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty-four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis; the structure of the space of cellular automata; learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, such as the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole.
Howard Gutowitz is Collaborateur in the Service de Physique du Solide et Résonance Magnétique, Commissariat à l'Énergie Atomique, Saclay, France.
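    The forward problem, evolving a given rule, is easy to make concrete. The sketch below steps elementary rule 90 (each cell becomes the XOR of its two neighbours, with periodic boundaries) from a single live cell, producing the familiar Sierpinski-like pattern.

```python
# Minimal forward-problem illustration: evolving elementary rule 90.
def step_rule90(row):
    n = len(row)
    # New cell = XOR of left and right neighbours (periodic boundaries).
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    print("".join(".#"[c] for c in row))
    row = step_rule90(row)
# prints:
# ...#...
# ..#.#..
# .#...#.
```

    The inverse problem discussed in the book runs the other way: given such observed patterns, search the space of rules for one that produces them.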

  18. A spatial operator algebra for manipulator modeling and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, K.; Milman, M.

    1988-01-01

A powerful new spatial operator algebra for modeling, control, and trajectory design of manipulators is discussed along with its implementation in the Ada programming language. Applications of this algebra to robotics include an operator representation of the manipulator Jacobian matrix; the robot dynamical equations formulated in terms of the spatial algebra, showing their complete equivalence to the recursive Newton-Euler formulation of robot dynamics; the operator factorization and inversion of the manipulator mass matrix, which immediately results in O(N) recursive forward dynamics algorithms; the joint accelerations of a manipulator due to a tip contact force; the recursive computation of the equivalent mass matrix as seen at the tip of a manipulator; and recursive forward dynamics of a closed chain system. Finally, additional applications and current research involving the use of the spatial operator algebra are discussed in general terms.

  19. An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax-type performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.

  20. Development of automation and robotics for space via computer graphic simulation methods

    NASA Technical Reports Server (NTRS)

    Fernandez, Ken

    1988-01-01

A robot simulation system has been developed to perform automation and robotics system design studies. The system uses a procedure-oriented solid modeling language to produce a model of the robotic mechanism. The simulator generates the kinematics, inverse kinematics, dynamics, control, and real-time graphic simulations needed to evaluate the performance of the model. Simulation examples are presented, including simulation of the Space Station and the design of telerobotics for the Orbital Maneuvering Vehicle.

  1. A New Paradigm for Satellite Retrieval of Hydrologic Variables: The CDRD Methodology

    NASA Astrophysics Data System (ADS)

    Smith, E. A.; Mugnai, A.; Tripoli, G. J.

    2009-09-01

Historically, retrieval of thermodynamically active geophysical variables in the atmosphere (e.g., temperature, moisture, precipitation) involved some type of inversion scheme - embedded within the retrieval algorithm - to transform radiometric observations (a vector) to the desired geophysical parameter(s) (either a scalar or a vector). Inversion is fundamentally a mathematical operation involving some type of integral-differential radiative transfer equation - often resisting a straightforward algebraic solution - in which the integral side of the equation (typically the right-hand side) contains the desired geophysical vector, while the left-hand side contains the radiative measurement vector often free of operators. Inversion was considered more desirable than forward modeling because the forward model solution had to be selected from a generally unmanageable set of parameter-observation relationships. However, in the classical inversion problem for retrieval of temperature using multiple radiative frequencies along the wing of an absorption band (or line) of a well-mixed radiatively active gas, in either the infrared or microwave spectrum, the inversion equation to be solved consists of a Fredholm integral equation of the first kind - a specific type of transform problem in which there are an infinite number of solutions. This meant that special treatment of the transform process was required in order to obtain a single solution. Inversion had become the method of choice for retrieval in the 1950s because of its mathematical elegance, and because the numerical approaches used to solve the problems (typically some type of relaxation or perturbation scheme) were computationally fast in an age when computer speeds were slow.
Like many solution schemes, inversion has lingered on regardless of the fact that computer speeds have increased many orders of magnitude and forward modeling itself has become far more elegant in combination with Bayesian averaging procedures given that the a priori probabilities of occurrence in the true environment of the parameter(s) in question can be approximated (or are actually known). In this presentation, the theory of the more modern retrieval approach using a combination of cloud, radiation and other specialized forward models in conjunction with Bayesian weighted averaging will be reviewed in light of a brief history of inversion. The application of the theory will be cast in the framework of what we call the Cloud-Dynamics-Radiation-Database (CDRD) methodology - which we now use for the retrieval of precipitation from spaceborne passive microwave radiometers. In a companion presentation, we will specifically describe the CDRD methodology and present results for its application within the Mediterranean basin.
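    The forward-model-plus-Bayesian-averaging alternative described above can be sketched with a toy database: simulate observations for many a priori parameter draws, then estimate the parameter as the average of the database entries weighted by their likelihood given the measurement. The scalar "rain rate", three-channel forward model, prior, and noise level below are invented for illustration and are not the CDRD configuration.

```python
import numpy as np

# Toy Bayesian weighted-average retrieval over a forward-model database.
rng = np.random.default_rng(3)
n_db = 5000
rain = rng.gamma(2.0, 2.0, n_db)            # a priori draws of the parameter

def forward(r):
    # Hypothetical 3-channel radiative forward model (invented slopes).
    return np.stack([200 + 5 * r, 250 - 3 * r, 230 + 0.5 * r * r], axis=-1)

Tb_db = forward(rain)                        # database of simulated observations

sigma = 2.0                                  # assumed radiometric noise (K)
r_true = 4.0
y = forward(np.array([r_true]))[0] + rng.normal(0.0, sigma, 3)

# Likelihood weight of each database entry given the measurement y:
w = np.exp(-0.5 * np.sum((Tb_db - y) ** 2, axis=1) / sigma ** 2)
r_hat = float(np.sum(w * rain) / np.sum(w))  # Bayesian weighted average
print(round(r_hat, 2))
```

    No integral equation is inverted: the prior enters through the database draws and the forward model replaces the kernel, which is the shift the passage describes.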

  2. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for the implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.

  3. Tensor methodology and computational geometry in direct computational experiments in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia

    2017-07-01

The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is naturally embedded in the finite-element operations used to construct the numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is considered as the elementary computational object. The tensor representation of computational objects yields a straightforward linear and unique approximation of elementary volumes and of the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of the developed algorithms, whose numerical procedures exhibit natural parallelism. The advantages of the proposed approach are achieved, among other means, by representing the motion of large particles of a continuous medium in dual coordinate systems and performing computing operations in the projections of these two coordinate systems, with direct and inverse transformations. Thus, a new method for the mathematical representation and synthesis of computational experiments based on the large-particle method is proposed.

  4. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1979-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
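    A minimal sketch of an interior clustering function of this family is given below, assuming a simple sinh mapping whose inverse (from physical back to computational coordinates) is the inverse hyperbolic sine the abstract refers to; the paper's actual functions differ in how the clustering location and slope are specified.

```python
import numpy as np

# Illustrative interior clustering via a sinh mapping. The inverse map,
# xi = arcsinh(x * sinh(beta)) / beta, is the asinh-based form; beta
# controls how strongly points cluster around x = 0.
def cluster_interior(n, beta):
    """Map n uniform points on [-1, 1] so they cluster around x = 0."""
    xi = np.linspace(-1.0, 1.0, n)
    return np.sinh(beta * xi) / np.sinh(beta)   # dx/dxi is smallest at xi = 0

x = cluster_interior(21, beta=3.0)
dx = np.diff(x)
print(f"center spacing {dx[9]:.4f}, edge spacing {dx[0]:.4f}")
```

    Because dx/dxi = beta*cosh(beta*xi)/sinh(beta) is minimal at xi = 0, a uniform computational grid produces fine physical spacing at the interior clustering point and coarse spacing at the ends, which is the behaviour the truncation-error criteria are designed to control.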

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

The Second SIAM Conference on Computational Science and Engineering was held in San Diego from February 10-12, 2003. Total conference attendance was 553. This is a 23% increase in attendance over the first conference. The focus of this conference was to draw attention to the tremendous range of major computational efforts on large problems in science and engineering, to promote the interdisciplinary culture required to meet these large-scale challenges, and to encourage the training of the next generation of computational scientists. Computational Science & Engineering (CS&E) is now widely accepted, along with theory and experiment, as a crucial third mode of scientific investigation and engineering design. Aerospace, automotive, biological, chemical, semiconductor, and other industrial sectors now rely on simulation for technical decision support. For federal agencies also, CS&E has become an essential support for decisions on resources, transportation, and defense. CS&E is, by nature, interdisciplinary. It grows out of physical applications and it depends on computer architecture, but at its heart are powerful numerical algorithms and sophisticated computer science techniques. From an applied mathematics perspective, much of CS&E has involved analysis, but the future surely includes optimization and design, especially in the presence of uncertainty. Another mathematical frontier is the assimilation of very large data sets through such techniques as adaptive multi-resolution, automated feature search, and low-dimensional parameterization.
The themes of the 2003 conference included, but were not limited to: Advanced Discretization Methods; Computational Biology and Bioinformatics; Computational Chemistry and Chemical Engineering; Computational Earth and Atmospheric Sciences; Computational Electromagnetics; Computational Fluid Dynamics; Computational Medicine and Bioengineering; Computational Physics and Astrophysics; Computational Solid Mechanics and Materials; CS&E Education; Meshing and Adaptivity; Multiscale and Multiphysics Problems; Numerical Algorithms for CS&E; Discrete and Combinatorial Algorithms for CS&E; Inverse Problems; Optimal Design, Optimal Control, and Inverse Problems; Parallel and Distributed Computing; Problem-Solving Environments; Software and Middleware Systems; Uncertainty Estimation and Sensitivity Analysis; and Visualization and Computer Graphics.

  6. On the value of incorporating spatial statistics in large-scale geophysical inversions: the SABRe case

    NASA Astrophysics Data System (ADS)

    Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.

    2010-12-01

    Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. 
Our work shows that spatial statistics can be incorporated in large-scale ERT inversions to improve the inversion results without making them computationally prohibitive.
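    The difference between a plain smoothing term and a geostatistical prior can be sketched in one dimension: with a Gaussian prior covariance C, the (quasi-)linear geostatistical estimate has the closed form m = C Gᵀ (G C Gᵀ + σ²I)⁻¹ d, so the spatial correlation of the solution is supplied by C rather than by a generic flatness penalty. The forward operator, kernel, and noise level below are invented for illustration.

```python
import numpy as np

# Toy 1-D stand-in for covariance-based (geostatistical) regularization.
rng = np.random.default_rng(4)
n, m_obs = 60, 15
x = np.linspace(0.0, 1.0, n)
G = rng.standard_normal((m_obs, n)) / np.sqrt(n)        # hypothetical forward operator
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)      # exponential prior covariance
m_true = np.linalg.cholesky(C + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
d = G @ m_true + 0.01 * rng.standard_normal(m_obs)

# Linear geostatistical (MAP) estimate: m = C G^T (G C G^T + sigma^2 I)^-1 d
sigma2 = 1e-4
m_hat = C @ G.T @ np.linalg.solve(G @ C @ G.T + sigma2 * np.eye(m_obs), d)
print(float(np.linalg.norm(G @ m_hat - d)))
```

    The same C also yields posterior covariances, which is the uncertainty information the standard smoothness-constrained inversion does not provide.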

  7. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

The determination of earthquake source parameters is an essential problem in seismology. Accurate and timely determination of the earthquake parameters (such as moment, depth, and the strike, dip and rake of fault planes) is significant for both rupture dynamics and ground motion prediction or simulation. The study of the rupture process, especially for moderate and large earthquakes, is also essential, as detailed kinematic studies have become the routine work of seismologists. However, among these events, some behave very unusually and intrigue seismologists. These earthquakes usually consist of two similar-size sub-events which occur within a very short time interval, such as the mb 4.5 event of Dec. 9, 2003 in Virginia. The study of these special events, including the determination of the source parameters of each sub-event, will be helpful to the understanding of earthquake dynamics. However, the seismic signals of the two distinct sources are mixed up, making the inversion difficult. For common events, the Cut and Paste (CAP) method has been proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth using a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct sub-events. However, the simultaneous inversion of both sub-events makes the computation very time consuming, so we developed a hybrid GPU and CPU version of CAP (HYBRID_CAP) to improve the computational efficiency.
Thanks to advantages in multi-dimensional storage and processing on the GPU, we obtain excellent performance of the revised code on the GPU-CPU combined architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the events in Virginia, USA on Dec. 9, 2003, we re-invert the source parameters, and detailed analysis of the regional waveforms indicates that the Virginia earthquake included two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km, with a focal mechanism of strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic, with no need for human intervention.
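    The joint grid search over two overlapping sub-events can be sketched with scalar stand-ins: each "source" is a delayed, scaled pulse, and the search minimizes the misfit of their superposition against the observed trace. The pulse shape, grid, and amplitude set below are invented; the real CAP search is over focal mechanism, depth, and time shifts of full waveforms.

```python
import numpy as np

# Toy joint grid search for two overlapping sub-events.
def wavelet(t, t0):
    return np.exp(-((t - t0) / 0.3) ** 2)   # hypothetical pulse shape

t = np.linspace(0.0, 10.0, 500)
obs = 1.0 * wavelet(t, 3.0) + 0.7 * wavelet(t, 4.2)   # two sub-events, 1.2 s apart

delays = np.arange(0.0, 8.01, 0.1)
best = min(
    ((np.linalg.norm(obs - a * wavelet(t, t1) - b * wavelet(t, t2)), t1, t2)
     for t1 in delays for t2 in delays if t2 >= t1     # ordered pair of onsets
     for a in (0.7, 1.0) for b in (0.7, 1.0)),         # candidate amplitudes
    key=lambda r: r[0],
)
print(f"recovered onsets: {best[1]:.1f} s and {best[2]:.1f} s")
```

    The search space is the product of both sub-events' parameter grids, which is why the joint inversion is so much more expensive than the single-event case and why the GPU port pays off.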

  8. Cloud-based simulations on Google Exacycle reveal ligand modulation of GPCR activation pathways

    NASA Astrophysics Data System (ADS)

    Kohlhoff, Kai J.; Shukla, Diwakar; Lawrenz, Morgan; Bowman, Gregory R.; Konerding, David E.; Belov, Dan; Altman, Russ B.; Pande, Vijay S.

    2014-01-01

    Simulations can provide tremendous insight into the atomistic details of biological mechanisms, but micro- to millisecond timescales are historically only accessible on dedicated supercomputers. We demonstrate that cloud computing is a viable alternative that brings long-timescale processes within reach of a broader community. We used Google's Exacycle cloud-computing platform to simulate two milliseconds of dynamics of a major drug target, the G-protein-coupled receptor β2AR. Markov state models aggregate independent simulations into a single statistical model that is validated by previous computational and experimental results. Moreover, our models provide an atomistic description of the activation of a G-protein-coupled receptor and reveal multiple activation pathways. Agonists and inverse agonists interact differentially with these pathways, with profound implications for drug design.
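    The Markov-state-model aggregation step, counting transitions between discretized conformational states across independent trajectories and row-normalizing into one transition matrix, can be sketched as follows (toy three-state trajectories, not the β2AR data).

```python
import numpy as np

# Minimal Markov-state-model estimation from discrete state trajectories.
trajs = [
    [0, 0, 1, 1, 2, 2, 1, 0],
    [2, 2, 1, 1, 1, 0, 0, 0],
]
n_states = 3
counts = np.zeros((n_states, n_states))
for traj in trajs:
    for a, b in zip(traj[:-1], traj[1:]):   # successive state pairs
        counts[a, b] += 1
# Row-normalized counts give the maximum-likelihood transition matrix.
T = counts / counts.sum(axis=1, keepdims=True)
print(T)
```

    Because the counts simply add across trajectories, many short independent cloud simulations combine into one statistical model, which is what makes the Exacycle approach equivalent to a single long trajectory for estimating kinetics.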

  9. Gradient-free MCMC methods for dynamic causal modelling.

    PubMed

    Sengupta, Biswa; Friston, Karl J; Penny, Will D

    2015-05-15

    In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler). Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
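    The baseline of this comparison, random walk Metropolis, fits in a few lines; the target density and step size below are illustrative stand-ins, not the neural mass model posterior.

```python
import math
import random

# Minimal random-walk Metropolis sampler targeting a standard normal.
random.seed(0)

def rw_metropolis(logp, x0, n, step=1.0):
    x, out = x0, []
    for _ in range(n):
        prop = x + random.gauss(0.0, step)            # symmetric proposal
        if math.log(random.random()) < logp(prop) - logp(x):
            x = prop                                  # accept; else keep current state
        out.append(x)
    return out

samples = rw_metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

    The successive samples are correlated, which is exactly why the comparison above is phrased in independent samples per unit compute time rather than raw samples.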

  10. Algorithms and programs complex for solving inverse problems of artificial Earth satellite dynamics using parallel computations (Russian Title: Программно-математическое обеспечение для решения обратных задач динамики ИСЗ с использованием параллельных вычислений)

    NASA Astrophysics Data System (ADS)

    Chuvashov, I. N.

    2011-07-01

This paper describes a complex of algorithms and programs for solving inverse problems of artificial Earth satellite dynamics. The complex is intended for satellite orbit improvement, calculation of motion-model parameters, and related tasks. The program complex was developed for the "Skiff Cyberia" cluster. Results of numerical experiments obtained by using the new complex together with the program "Numerical model of the motion of a system of artificial satellites" are presented.

  11. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
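The "sketching" idea can be illustrated with a toy least-squares inversion: multiply the tall forward operator and the data by a short random matrix and solve the reduced problem. This sketch uses a plain Gaussian sketch and dense stdlib arithmetic for clarity; the actual RGA implementation in Julia/MADS is far more sophisticated, and all names here are illustrative:

```python
import random

def sketch_lsq(G, y, k, seed=0):
    """Solve min ||G m - y|| after replacing (G, y) by (S G, S y), where S is a
    k x n_obs Gaussian sketching matrix that compresses the observations."""
    rng = random.Random(seed)
    nobs, n = len(G), len(G[0])
    S = [[rng.gauss(0.0, 1.0) / k ** 0.5 for _ in range(nobs)] for _ in range(k)]
    SG = [[sum(S[i][j] * G[j][c] for j in range(nobs)) for c in range(n)] for i in range(k)]
    Sy = [sum(S[i][j] * y[j] for j in range(nobs)) for i in range(k)]
    # Normal equations (SG)^T (SG) m = (SG)^T (Sy), solved by Gaussian elimination.
    A = [[sum(SG[r][i] * SG[r][j] for r in range(k)) for j in range(n)] for i in range(n)]
    b = [sum(SG[r][i] * Sy[r] for r in range(k)) for i in range(n)]
    for i in range(n):  # elimination with partial pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    m = [0.0] * n
    for i in range(n - 1, -1, -1):
        m[i] = (b[i] - sum(A[i][c] * m[c] for c in range(i + 1, n))) / A[i][i]
    return m
```

For a consistent (noise-free) system, the sketched solution coincides with the full least-squares solution whenever the sketched operator retains full column rank, which is the sense in which the cost scales with the information content rather than the number of observations.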

  12. Inverse Optimization: A New Perspective on the Black-Litterman Model

    PubMed Central

    Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.

    2014-01-01

    The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873

  13. Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite

    DTIC Science & Technology

    2016-09-01

Report front-matter excerpt: "...aerial platform for subsequent visual sensor integration." Subject terms: autonomous system, quadrotors, direct method, inverse dynamics in the virtual domain (IDVD). Contents fragments: Controller Architecture; Inverse Dynamics in the Virtual Domain. Acronyms: ...control station; GPS, Global-Positioning System; IDVD, inverse dynamics in the virtual domain; ILP, integer linear program; INS, inertial-navigation system.

  14. Adaptive online inverse control of a shape memory alloy wire actuator using a dynamic neural network

    NASA Astrophysics Data System (ADS)

    Mai, Huanhuan; Song, Gangbing; Liao, Xiaofeng

    2013-01-01

    Shape memory alloy (SMA) actuators exhibit severe hysteresis, a nonlinear behavior, which complicates control strategies and limits their applications. This paper presents a new approach to controlling an SMA actuator through an adaptive inverse model based controller that consists of a dynamic neural network (DNN) identifier, a copy dynamic neural network (CDNN) feedforward term and a proportional (P) feedback action. Unlike fixed hysteresis models used in most inverse controllers, the proposed one uses a DNN to identify online the relationship between the applied voltage to the actuator and the displacement (the inverse model). Even without a priori knowledge of the SMA hysteresis and without pre-training, the proposed controller can precisely control the SMA wire actuator in various tracking tasks by identifying online the inverse model of the SMA actuator. Experiments were conducted, and experimental results demonstrated real-time modeling capabilities of DNN and the performance of the adaptive inverse controller.
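The controller structure above (an online-identified inverse-model feedforward term plus a small proportional correction) can be caricatured with a single adaptive weight standing in for the DNN, and a hypothetical static-gain plant standing in for the SMA hysteresis. This only illustrates the adaptation loop, not the paper's controller:

```python
def adaptive_inverse_track(plant_gain, ref, lr=0.1, kp=0.2):
    """Track ref through an unknown static plant y = plant_gain * u by adapting an
    inverse gain w online (LMS-style update), plus a small P feedback correction."""
    w, y, out = 0.1, 0.0, []
    for r in ref:
        u = w * r + kp * (r - y)   # feedforward inverse model + P feedback
        y = plant_gain * u         # unknown plant (gain assumed positive)
        w += lr * (r - y) * r      # adapt the inverse weight from tracking error
        out.append(y)
    return out
```

With no prior knowledge of the plant, the weight converges toward the inverse gain and the tracking error decays, which is the behavior the DNN identifier provides for the hysteretic SMA wire.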

  15. Space robot simulator vehicle

    NASA Technical Reports Server (NTRS)

    Cannon, R. H., Jr.; Alexander, H.

    1985-01-01

    A Space Robot Simulator Vehicle (SRSV) was constructed to model a free-flying robot capable of doing construction, manipulation and repair work in space. The SRSV is intended as a test bed for development of dynamic and static control methods for space robots. The vehicle is built around a two-foot-diameter air-cushion vehicle that carries batteries, power supplies, gas tanks, computer, reaction jets and radio equipment. It is fitted with one or two two-link manipulators, which may be of many possible designs, including flexible-link versions. Both the vehicle body and its first arm are nearly complete. Inverse dynamic control of the robot's manipulator has been successfully simulated using equations generated by the dynamic simulation package SDEXACT. In this mode, the position of the manipulator tip is controlled not by fixing the vehicle base through thruster operation, but by controlling the manipulator joint torques to achieve the desired tip motion, while allowing for the free motion of the vehicle base. One of the primary goals is to minimize use of the thrusters in favor of intelligent control of the manipulator. Ways to reduce the computational burden of control are described.

  16. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
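The ensemble Kalman machinery underlying such smoothers is easiest to see in its simplest form: one stochastic analysis step for a scalar state, with the gain built entirely from ensemble statistics. This is a generic textbook update, not the localized smoother developed in the paper:

```python
import random

def enkf_update(ensemble, obs, obs_noise_std, h=lambda x: x, seed=0):
    """One stochastic ensemble Kalman update for a scalar state: the Kalman gain is
    computed from ensemble covariances, and each member assimilates a perturbed
    observation so the posterior spread is statistically consistent."""
    rng = random.Random(seed)
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xm, hm = sum(ensemble) / n, sum(hx) / n
    pxh = sum((x - xm) * (y - hm) for x, y in zip(ensemble, hx)) / (n - 1)
    phh = sum((y - hm) ** 2 for y in hx) / (n - 1)
    K = pxh / (phh + obs_noise_std ** 2)
    return [x + K * (obs + rng.gauss(0.0, obs_noise_std) - y)
            for x, y in zip(ensemble, hx)]
```

A smoother applies the same gain computation to states at earlier times; localization then tapers the ensemble covariances to control spurious long-range correlations in high dimensions.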

  17. On wings and keels (2)

    NASA Astrophysics Data System (ADS)

    Slooff, J. W.

    1985-05-01

    The physical mechanisms governing the hydrodynamics of sailing yacht keels and the parameters that, through these mechanisms, determine keel performance are discussed. It is concluded that due to the presence of the free water surface optimum keel shapes differ from optimum shapes for aircraft wings. Utilizing computational fluid dynamics analysis and optimization it is found that the performance of conventional keels can be improved significantly by reducing taper or even applying inverse taper (upside-down keel) and that decisive improvements in performance can be realized through keels with winglets.

  18. Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora

    2016-02-05

Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
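The iterative Boltzmann inversion step referenced above has a compact closed form: the tabulated pair potential is corrected by the log-ratio of the simulated and target radial distribution functions. A one-line sketch (potentials and RDFs as lists over radial bins; `alpha` is a damping factor often used in practice):

```python
import math

def ibi_update(U, g_sim, g_target, kT=1.0, alpha=1.0):
    """One iterative Boltzmann inversion step:
    U_{i+1}(r) = U_i(r) + alpha * kT * ln(g_i(r) / g*(r)),
    nudging the coarse-grained potential until the simulated RDF matches the target."""
    return [u + alpha * kT * math.log(gs / gt)
            for u, gs, gt in zip(U, g_sim, g_target)]
```

Where the simulated RDF overshoots the target (too much local structure), the potential is made more repulsive at that separation, and vice versa; iteration proceeds until the RDFs agree.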

  19. A comprehensive inversion approach for feedforward compensation of piezoactuator system at high frequency

    NASA Astrophysics Data System (ADS)

    Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han

    2016-09-01

Motion control of the piezoactuator system over broadband frequencies is limited due to its inherent hysteresis and system dynamics. One of the suggested ways is to use a feedforward controller to linearize the input-output relationship of the piezoactuator system. Although there have been many feedforward approaches, it is still a challenge to develop a feedforward controller for the piezoactuator system at high frequency. Hence, this paper presents a comprehensive inversion approach in consideration of the coupling of hysteresis and dynamics. In this work, the influence of dynamics compensation on the input-output relationship of the piezoactuator system is investigated first. With system dynamics compensation, the input-output relationship of the piezoactuator system is further represented as a rate-dependent nonlinearity due to the inevitable dynamics compensation error, especially at high frequency. Based on this result, a feedforward controller composed of a cascade of linear dynamics inversion and rate-dependent nonlinearity inversion is developed. Then, the system identification of the comprehensive inversion approach is proposed. Finally, experimental results show that the proposed approach improves tracking of both periodic and non-periodic trajectories at medium and high frequencies compared with conventional feedforward approaches.

  20. Trajectory Tracking of a Planer Parallel Manipulator by Using Computed Force Control Method

    NASA Astrophysics Data System (ADS)

    Bayram, Atilla

    2017-03-01

Despite their small workspace, parallel manipulators have some advantages over their serial counterparts in terms of higher speed, acceleration, rigidity, accuracy, manufacturing cost and payload. Accordingly, this type of manipulator can be used in many applications, such as high-speed machine tools, tuning machines for feeding, sensitive cutting, assembly and packaging. This paper presents a special type of planar parallel manipulator with three degrees of freedom. It is constructed as a variable geometry truss, generally known as a planar Stewart platform. The reachable and orientation workspaces are obtained for this manipulator. The inverse kinematic analysis is solved for trajectory tracking, accounting for redundancy and joint-limit avoidance. Then, the dynamics model of the manipulator is established by using the virtual work method. Simulations are performed to follow the given planar trajectories by using the dynamic equations of the variable geometry truss manipulator and the computed force control method. In the computed force control method, the feedback gain matrices for PD control are tuned either as fixed matrices by trial and error or as variable ones by means of optimization with a genetic algorithm.
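The computed force (computed torque) law used above can be sketched for a one-degree-of-freedom model; `settle` is an illustrative driver, and the gains and plant parameters are arbitrary stand-ins, not values from the paper:

```python
def computed_torque(M, C, g, q, qd, q_des, qd_des, qdd_des, Kp, Kd):
    """tau = M*(qdd_des + Kp*e + Kd*edot) + C*qd + g cancels the modeled dynamics,
    leaving the linear error dynamics e'' + Kd*e' + Kp*e = 0."""
    e, edot = q_des - q, qd_des - qd
    return M * (qdd_des + Kp * e + Kd * edot) + C * qd + g

def settle(q_des=1.0, Kp=25.0, Kd=10.0, M=2.0, C=0.5, g=9.8, dt=0.01, steps=1000):
    """Drive a 1-DOF plant M*qdd + C*qd + g = tau toward q_des; return the final position."""
    q = qd = 0.0
    for _ in range(steps):
        tau = computed_torque(M, C, g, q, qd, q_des, 0.0, 0.0, Kp, Kd)
        qdd = (tau - C * qd - g) / M     # true plant (here it matches the model exactly)
        qd += qdd * dt
        q += qd * dt
    return q
```

With Kd^2 = 4*Kp the error dynamics are critically damped; the paper's PD gain matrices play the same role for the multi-DOF truss manipulator.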

  1. Joint Loads and Cartilage Stress in Intact Joints of Military Transtibial Amputees: Enhancing Quality of Life

    DTIC Science & Technology

    2017-04-01

Progress-report excerpt: "...crosstalk); analysis of tested subjects underway. 4) Developed analytical methods to obtain knee joint loads using EMG-driven inverse dynamics; analysis of..." Task 1.3: EMG-driven inverse dynamic (ID) analyses with OpenSim for amputee and control group subjects (target date ...13/2018; completion 40%). "...predicted by EMG-driven inverse dynamics." Two to three conference papers were being prepared for submission in February 2017.

  2. Dynamic stress changes during earthquake rupture

    USGS Publications Warehouse

    Day, S.M.; Yu, G.; Wald, D.J.

    1998-01-01

    We assess two competing dynamic interpretations that have been proposed for the short slip durations characteristic of kinematic earthquake models derived by inversion of earthquake waveform and geodetic data. The first interpretation would require a fault constitutive relationship in which rapid dynamic restrengthening of the fault surface occurs after passage of the rupture front, a hypothesized mechanical behavior that has been referred to as "self-healing." The second interpretation would require sufficient spatial heterogeneity of stress drop to permit rapid equilibration of elastic stresses with the residual dynamic friction level, a condition we refer to as "geometrical constraint." These interpretations imply contrasting predictions for the time dependence of the fault-plane shear stresses. We compare these predictions with dynamic shear stress changes for the 1992 Landers (M 7.3), 1994 Northridge (M 6.7), and 1995 Kobe (M 6.9) earthquakes. Stress changes are computed from kinematic slip models of these earthquakes, using a finite-difference method. For each event, static stress drop is highly variable spatially, with high stress-drop patches embedded in a background of low, and largely negative, stress drop. The time histories of stress change show predominantly monotonic stress change after passage of the rupture front, settling to a residual level, without significant evidence for dynamic restrengthening. The stress change at the rupture front is usually gradual rather than abrupt, probably reflecting the limited resolution inherent in the underlying kinematic inversions. On the basis of this analysis, as well as recent similar results obtained independently for the Kobe and Morgan Hill earthquakes, we conclude that, at the present time, the self-healing hypothesis is unnecessary to explain earthquake kinematics.

  3. Sequential estimation and satellite data assimilation in meteorology and oceanography

    NASA Technical Reports Server (NTRS)

    Ghil, M.

    1986-01-01

    The central theme of this review article is the role that dynamics plays in estimating the state of the atmosphere and of the ocean from incomplete and noisy data. Objective analysis and inverse methods represent an attempt at relying mostly on the data and minimizing the role of dynamics in the estimation. Four-dimensional data assimilation tries to balance properly the roles of dynamical and observational information. Sequential estimation is presented as the proper framework for understanding this balance, and the Kalman filter as the ideal, optimal procedure for data assimilation. The optimal filter computes forecast error covariances of a given atmospheric or oceanic model exactly, and hence data assimilation should be closely connected with predictability studies. This connection is described, and consequences drawn for currently active areas of the atmospheric and oceanic sciences, namely, mesoscale meteorology, medium and long-range forecasting, and upper-ocean dynamics.

  4. Dynamical multiferroicity

    NASA Astrophysics Data System (ADS)

    Juraschek, Dominik M.; Fechner, Michael; Balatsky, Alexander V.; Spaldin, Nicola A.

    2017-06-01

    An appealing mechanism for inducing multiferroicity in materials is the generation of electric polarization by a spatially varying magnetization that is coupled to the lattice through the spin-orbit interaction. Here we describe the reciprocal effect, in which a time-dependent electric polarization induces magnetization even in materials with no existing spin structure. We develop a formalism for this dynamical multiferroic effect in the case for which the polarization derives from optical phonons, and compute the strength of the phonon Zeeman effect, which is the solid-state equivalent of the well-established vibrational Zeeman effect in molecules, using density functional theory. We further show that a recently observed behavior—the resonant excitation of a magnon by optically driven phonons—is described by the formalism. Finally, we discuss examples of scenarios that are not driven by lattice dynamics and interpret the excitation of Dzyaloshinskii-Moriya-type electromagnons and the inverse Faraday effect from the viewpoint of dynamical multiferroicity.

  5. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

Guo, Jie; Zhu, Chang'an

    2016-01-01

The development of optics and computer technologies enables the application of vision-based techniques, which use digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and adds no mass to the structure. In this study, a high-speed camera system is developed to complete the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve the efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel onto the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier of a suspension viaduct. Experimental results show that the proposed algorithm can extract an accurate displacement signal and accomplish the vibration measurement of large-scale structures.
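The inverse compositional trick that makes such trackers fast, precomputing the gradient and Hessian on the template so each iteration only warps the image and composes the inverse of the incremental shift, is easiest to see in one dimension with a pure translation. This is a generic illustration of the algorithm, not the authors' implementation:

```python
def lk_translation_1d(template, image, p0=0.0, iters=50):
    """Inverse-compositional Lucas-Kanade for a 1-D translation p with image(x + p) ~ template(x).
    Gradient and (scalar) Hessian come from the template and are computed once."""
    def sample(signal, x):                 # linear interpolation with edge clamping
        x = min(max(x, 0.0), len(signal) - 1.0)
        i = min(int(x), len(signal) - 2)
        t = x - i
        return (1 - t) * signal[i] + t * signal[i + 1]

    grad = [(template[min(i + 1, len(template) - 1)] - template[max(i - 1, 0)]) / 2.0
            for i in range(len(template))]  # precomputed template gradient
    H = sum(g * g for g in grad)            # precomputed 1x1 "Hessian"
    p = p0
    for _ in range(iters):
        err = [sample(image, i + p) - template[i] for i in range(len(template))]
        dp = sum(g * e for g, e in zip(grad, err)) / H
        p -= dp                             # compose with the inverse of the increment
    return p
```

Because `grad` and `H` never change, each iteration costs one warp and two dot products; in 2-D image tracking the same structure avoids re-evaluating image gradients and re-inverting the Hessian every frame.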

  6. From systems biology to dynamical neuropharmacology: proposal for a new methodology.

    PubMed

    Erdi, P; Kiss, T; Tóth, J; Ujfalussy, B; Zalányi, L

    2006-07-01

    The concepts and methods of systems biology are extended to neuropharmacology in order to test and design drugs for the treatment of neurological and psychiatric disorders. Computational modelling by integrating compartmental neural modelling techniques and detailed kinetic descriptions of pharmacological modulation of transmitter-receptor interaction is offered as a method to test the electrophysiological and behavioural effects of putative drugs. Even more, an inverse method is suggested as a method for controlling a neural system to realise a prescribed temporal pattern. In particular, as an application of the proposed new methodology, a computational platform is offered to analyse the generation and pharmacological modulation of theta rhythm related to anxiety.

  7. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion prone to falling into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in real seismic wavefields, which makes inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. In order to improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. But the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck for FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover a low-wavenumber model with a demodulation operator (envelope operator), even though the low-frequency data do not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and the misfit function and the corresponding gradient operator were derived. Then we performed hybrid-domain FWI with the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation improves the computation speed.
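The envelope (demodulation) operator at the heart of envelope inversion is commonly taken as the magnitude of the analytic signal. A minimal sketch using a direct DFT, adequate for short traces, whereas a production code would use an FFT:

```python
import cmath
import math

def envelope(signal):
    """Envelope |s + i*H(s)| via the analytic signal: forward DFT, zero the negative
    frequencies, double the positive ones, inverse DFT, take the magnitude. O(N^2)."""
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]
    for k in range(n):
        if 0 < k < (n + 1) // 2:
            spec[k] *= 2       # positive frequencies doubled
        elif k > n // 2:
            spec[k] = 0        # negative frequencies zeroed (DC and Nyquist untouched)
    analytic = [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
                for t in range(n)]
    return [abs(a) for a in analytic]
```

For an amplitude-modulated trace a(t)·cos(wt) the result recovers a(t), the slowly varying envelope whose spectrum reaches far below the lowest frequency present in the raw data.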

  8. High-level ab initio potential energy surface and dynamics of the F- + CH3I SN2 and proton-transfer reactions.

    PubMed

    Olasz, Balázs; Szabó, István; Czakó, Gábor

    2017-04-01

Bimolecular nucleophilic substitution (SN2) and proton transfer are fundamental processes in chemistry and F- + CH3I is an important prototype of these reactions. Here we develop the first full-dimensional ab initio analytical potential energy surface (PES) for the F- + CH3I system using a permutationally invariant fit of high-level composite energies obtained with the combination of the explicitly-correlated CCSD(T)-F12b method, the aug-cc-pVTZ basis, core electron correlation effects, and a relativistic effective core potential for iodine. The PES accurately describes the SN2 channel producing I- + CH3F via Walden-inversion, front-side attack, and double-inversion pathways as well as the proton-transfer channel leading to HF + CH2I-. The relative energies of the stationary points on the PES agree well with the new explicitly-correlated all-electron CCSD(T)-F12b/QZ-quality benchmark values. Quasiclassical trajectory computations on the PES show that the proton transfer becomes significant at high collision energies and double-inversion as well as front-side attack trajectories can occur. The computed broad angular distributions and hot internal energy distributions indicate the dominance of indirect mechanisms at lower collision energies, which is confirmed by analyzing the integration time and leaving group velocity distributions. Comparison with available crossed-beam experiments shows usually good agreement.

  9. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
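The inversion strategy, tuning Markov rates so a deterministic master-equation solution matches target statistics, can be reduced to a toy two-state channel with a scalar cost. Everything here (the C<->O model, the bisection search, the target open probability) is illustrative, not the paper's PDE setting:

```python
def fit_rate(target_popen, dt=0.01, steps=2000):
    """Fit the opening rate of a two-state (closed <-> open) channel so the steady-state
    open probability from the master equation dp/dt = k_open*(1-p) - k_close*p matches
    a target, by bisection on the monotone cost p_open(k_open) - target."""
    k_close = 1.0

    def p_open(k_open):                 # forward-Euler integration of the master equation
        p = 0.0
        for _ in range(steps):
            p += dt * (k_open * (1 - p) - k_close * p)
        return p

    lo, hi = 1e-6, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_open(mid) < target_popen:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Identifiability shows up directly in this picture: the rate is recoverable because the cost is strictly monotone in it, whereas rates that leave the deterministic solution unchanged cannot be pinned down, the situation the paper diagnoses via the cost function's properties.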

  10. Multiscale solvers and systematic upscaling in computational physics

    NASA Astrophysics Data System (ADS)

    Brandt, A.

    2005-07-01

    Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).

  11. Dynamic Inversion based Control of a Docking Mechanism

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V.; Ippolito, Corey; Krishnakumar, Kalmanje

    2006-01-01

The problem of position and attitude control of the Stewart-platform-based docking mechanism is considered, motivated by its future application in space missions requiring autonomous docking capability. The control design is initiated within the framework of the intelligent flight control architecture being developed at NASA Ames Research Center. In this paper, the baseline position and attitude control system is designed using dynamic inversion with proportional-integral augmentation. The inverse dynamics uses a Newton-Euler formulation that includes the platform dynamics and the dynamics of the individual legs, along with viscous friction in the joints. Simulation results are presented using forward dynamics simulated by a commercial physics engine that builds the system as individual elements with appropriate joints and uses constrained numerical integration.
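The baseline structure described above, inverse dynamics wrapped around a proportional-integral pseudo-control, can be sketched for a single degree of freedom; the deliberate inertia mismatch shows the integral term absorbing model error. All parameters are illustrative, not those of the Stewart-platform mechanism:

```python
def simulate_dynamic_inversion(q_des=1.0, kp=4.0, ki=1.0, kd=4.0,
                               dt=0.001, steps=20000):
    """Dynamic inversion with PI augmentation for a 1-DOF model M*qdd + b*qd = u:
    the control u = M_model*v + b*qd inverts the modeled dynamics, and the
    pseudo-control v = kp*e + ki*int(e) - kd*qd drives q to q_des despite a
    10% inertia error between model and plant."""
    M_true, M_model, b = 2.0, 1.8, 0.5
    q = qd = integ = 0.0
    for _ in range(steps):
        e = q_des - q
        integ += e * dt
        v = kp * e + ki * integ - kd * qd   # PI-augmented pseudo-control
        u = M_model * v + b * qd            # model-based inverse dynamics
        qdd = (u - b * qd) / M_true         # true plant
        qd += qdd * dt
        q += qd * dt
    return q
```

With a perfect model the closed loop reduces exactly to the linear pseudo-control dynamics; the integrator is what keeps the steady-state error at zero when, as here, the inversion is only approximate.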

  12. Children's strategies to solving additive inverse problems: a preliminary analysis

    NASA Astrophysics Data System (ADS)

    Ding, Meixia; Auxter, Abbey E.

    2017-03-01

    Prior studies show that elementary school children generally "lack" formal understanding of inverse relations. This study goes beyond lack to explore what children might "have" in their existing conception. A total of 281 students, kindergarten to third grade, were recruited to respond to a questionnaire that involved both contextual and non-contextual tasks on inverse relations, requiring both computational and explanatory skills. Results showed that children demonstrated better performance in computation than explanation. However, many students' explanations indicated that they did not necessarily utilize inverse relations for computation. Rather, they appeared to possess partial understanding, as evidenced by their use of part-whole structure, which is a key to understanding inverse relations. A close inspection of children's solution strategies further revealed that the sophistication of children's conception of part-whole structure varied in representation use and unknown quantity recognition, which suggests rich opportunities to develop students' understanding of inverse relations in lower elementary classrooms.

  13. Viscous Design of TCA Configuration

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Bauer, Steven X. S.; Campbell, Richard L.

    1999-01-01

    The goal in this effort is to redesign the baseline TCA configuration for improved performance at both supersonic and transonic cruise. Viscous analyses are conducted with OVERFLOW, a Navier-Stokes code for overset grids, using PEGSUS to compute the interpolations between overset grids. Viscous designs are conducted with OVERDISC, a script which couples OVERFLOW with the Constrained Direct Iterative Surface Curvature (CDISC) inverse design method. The successful execution of any computational fluid dynamics (CFD) based aerodynamic design method for complex configurations requires an efficient method for regenerating the computational grids to account for modifications to the configuration shape. The first section of this presentation deals with the automated regridding procedure used to generate overset grids for the fuselage/wing/diverter/nacelle configurations analysed in this effort. The second section outlines the procedures utilized to conduct OVERDISC inverse designs. The third section briefly covers the work conducted by Dick Campbell, in which a dual-point design at Mach 2.4 and 0.9 was attempted using OVERDISC; the initial configuration from which this design effort was started is an early version of the optimized shape for the TCA configuration developed by the Boeing Commercial Airplane Group (BCAG), which eventually evolved into the NCV design. The final section presents results from application of the Natural Flow Wing design philosophy to the TCA configuration.

  14. Robust state preparation in quantum simulations of Dirac dynamics

    NASA Astrophysics Data System (ADS)

    Song, Xue-Ke; Deng, Fu-Guo; Lamata, Lucas; Muga, J. G.

    2017-02-01

    A nonrelativistic system such as an ultracold trapped ion may perform a quantum simulation of a Dirac equation dynamics under specific conditions. The resulting Hamiltonian and dynamics are highly controllable, but the coupling between momentum and internal levels poses some difficulties to manipulate the internal states accurately in wave packets. We use invariants of motion to inverse engineer robust population inversion processes with a homogeneous, time-dependent simulated electric field. This exemplifies the usefulness of inverse-engineering techniques to improve the performance of quantum simulation protocols.

  15. Dynamic inverse models in human-cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Robinson, Ryan M.; Scobee, Dexter R. R.; Burden, Samuel A.; Sastry, S. Shankar

    2016-05-01

    Human interaction with the physical world is increasingly mediated by automation. This interaction is characterized by dynamic coupling between robotic (i.e. cyber) and neuromechanical (i.e. human) decision-making agents. Guaranteeing performance of such human-cyber-physical systems will require predictive mathematical models of this dynamic coupling. Toward this end, we propose a rapprochement between robotics and neuromechanics premised on the existence of internal forward and inverse models in the human agent. We hypothesize that, in tele-robotic applications of interest, a human operator learns to invert automation dynamics, directly translating from desired task to required control input. By formulating the model inversion problem in the context of a tracking task for a nonlinear control system in control-affine form, we derive criteria for exponential tracking and show that the resulting dynamic inverse model generally renders a portion of the physical system state (i.e., the internal dynamics) unobservable from the human operator's perspective. Under stability conditions, we show that the human can achieve exponential tracking without formulating an estimate of the system's state so long as they possess an accurate model of the system's dynamics. These theoretical results are illustrated using a planar quadrotor example. We then demonstrate that the automation can intervene to improve performance of the tracking task by solving an optimal control problem. Performance is guaranteed to improve under the assumption that the human learns and inverts the dynamic model of the altered system. We conclude with a discussion of practical limitations that may hinder exact dynamic model inversion.
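
The inverse-model idea can be sketched for a simple control-affine system. The example below uses a hypothetical pendulum, not the paper's quadrotor model; the gains k1 and k2 are arbitrary choices that place the tracking-error poles at -4 and -5.

```python
import math

# Hedged sketch (not the paper's model): dynamic inversion for the
# control-affine system  x'' = f(x) + g(x)*u  with f(x) = -sin(x) and
# g(x) = 1, i.e. a pendulum.  The inverse model maps the desired output
# trajectory y_d(t) = sin(t) directly to the control input u; error
# feedback yields exponential tracking.

def inverse_model(x, v, t, k1=20.0, k2=9.0):
    yd, yd_dot, yd_ddot = math.sin(t), math.cos(t), -math.sin(t)
    e, e_dot = yd - x, yd_dot - v
    # u = y_d'' - f(x) + k2*e_dot + k1*e cancels f, so the closed loop
    # obeys e'' + k2*e' + k1*e = 0 (poles at -4 and -5).
    return yd_ddot + math.sin(x) + k2 * e_dot + k1 * e

# Euler simulation starting from the wrong initial state.
x, v, dt = 0.5, 0.0, 1e-3
for step in range(int(10.0 / dt)):
    t = step * dt
    u = inverse_model(x, v, t)
    x, v = x + dt * v, v + dt * (-math.sin(x) + u)

final_error = abs(x - math.sin(10.0))  # tracking error after 10 s
```

With these gains the error decays roughly like e^(-4t), so the final tracking error is small despite the wrong initial condition.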

  16. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    NASA Astrophysics Data System (ADS)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes.
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
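
A toy sketch of the run-time autotuning idea: time each candidate configuration on a small workload and keep the fastest. The function names and the "chunk size" configuration are illustrative assumptions, not the CUSH API; a real autotuner would search over GPU launch parameters such as block sizes.

```python
import time

# Hedged sketch of run-time autotuning (names are illustrative, not the
# CUSH API): benchmark every candidate configuration and return the one
# with the lowest measured run time.  Here the "configuration" is just
# the chunk size of a blocked summation.

def blocked_sum(data, chunk):
    total = 0.0
    for i in range(0, len(data), chunk):
        total += sum(data[i:i + chunk])
    return total

def autotune(data, candidates, repeats=3):
    best_cfg, best_t = None, float("inf")
    for chunk in candidates:
        t0 = time.perf_counter()
        for _ in range(repeats):
            blocked_sum(data, chunk)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best_cfg, best_t = chunk, elapsed
    return best_cfg

data = [1.0] * 100_000
best = autotune(data, candidates=[64, 256, 1024, 4096])
```

Which configuration wins depends on the machine; the point is that the selection happens at run time, from measurements, rather than from a hand-tuned constant.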

  17. Design optimization of axial flow hydraulic turbine runner: Part I - an improved Q3D inverse method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    With the aim of constructing a comprehensive design optimization procedure for axial flow hydraulic turbines, an improved quasi-three-dimensional inverse method is proposed from a systems viewpoint, and a set of rotational-flow governing equations together with a blade geometry design equation is derived. In this inverse method the computational domain extends from the guide vane inlet to the far outlet of the runner blade, and the flows in the different regions are solved simultaneously. The influence of wicket gate parameters on the runner blade design can therefore be taken into account, and the difficulty of defining the flow condition at the runner blade inlet is overcome. Because a pre-computation of the initial blade design on the S2m surface is newly adopted, the iteration between the S1 and S2m surfaces is greatly reduced and the convergence of the inverse computation is improved. The model has been applied to the inverse computation of a Kaplan turbine runner, and experimental results together with direct flow analysis validate the inverse computation. Numerical investigations show that a proper enlargement of the guide vane distribution diameter improves the performance of an axial hydraulic turbine runner.

  18. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  19. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  20. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1983-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified; the simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other is a two-sided stretching function, for which arbitrary slopes at the two ends of the one-dimensional interval are specified; the simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055.
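
An interior clustering function based on the inverse hyperbolic sine can be sketched as follows. The code uses a common sinh stretching formula as an assumption, not necessarily the paper's exact form; beta sets the clustering strength and xc the interior point of minimum grid spacing.

```python
import math

# Hedged sketch of an interior stretching function built on the
# hyperbolic sine (a standard clustering formula, assumed here, not
# necessarily the paper's).  A uniform coordinate xi in [0, 1] is mapped
# to x in [0, 1] with grid points clustered around the interior point xc.

def sinh_stretch(n, xc, beta):
    a = math.exp(beta)
    # A locates the uniform coordinate that maps onto xc.
    A = (1.0 / (2.0 * beta)) * math.log(
        (1.0 + (a - 1.0) * xc) / (1.0 + (1.0 / a - 1.0) * xc))
    xs = []
    for i in range(n + 1):
        xi = i / n
        xs.append(xc * (1.0 + math.sinh(beta * (xi - A)) / math.sinh(beta * A)))
    return xs

x = sinh_stretch(n=50, xc=0.3, beta=6.0)
spacings = [b - a for a, b in zip(x, x[1:])]  # smallest spacing near xc
```

The map hits the endpoints exactly (x(0) = 0, x(1) = 1), is monotone, and concentrates points near xc, which is the behavior the truncation-error criteria above are designed to reward.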

  1. Dynamic Source Inversion of a M6.5 Intraslab Earthquake in Mexico: Application of a New Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.

    2013-05-01

    We introduce a novel approach for imaging earthquake dynamics from ground motion records based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanism, we have introduced a statistical approach to generate a set of solution models so that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data.
    Some parameters found for the Zumpango earthquake are Δτ = 30.2 ± 6.2 MPa, Er = 0.68 ± 0.36 × 10^15 J, G = 1.74 ± 0.44 × 10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09 and Mw = 6.64 ± 0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity and moment magnitude, respectively. [Figure: Mw6.5 intraslab Zumpango earthquake location, station locations and tectonic setting in central Mexico.]
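
The GA inversion idea can be sketched with a generic real-coded genetic algorithm and a stand-in forward model. Both are illustrative assumptions; the authors' parallel GA and dynamic rupture solver are far more involved.

```python
import random

# Hedged toy sketch of genetic-algorithm source inversion (a generic
# real-coded GA, not the authors' parallel implementation): recover two
# "source parameters" by minimizing the misfit between predicted and
# "observed" waveforms.

random.seed(0)

def waveform(stress_drop, rupture_vel, n=50):
    # Stand-in forward model: any smooth function of the parameters.
    return [stress_drop * (t / n) + rupture_vel * (t / n) ** 2
            for t in range(n)]

TRUE = (3.0, 0.7)
observed = waveform(*TRUE)

def misfit(params):
    pred = waveform(*params)
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

def ga(pop_size=60, generations=120, bounds=((0.0, 10.0), (0.0, 1.0))):
    pop = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        parents = pop[:pop_size // 4]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()                # blend crossover
            child = [w * p + (1 - w) * q for p, q in zip(a, b)]
            child = [min(max(c + random.gauss(0, 0.05), lo), hi)  # mutation
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=misfit)

best = ga()
```

The statistical, population-based character of the search is what makes the envelope-of-solutions idea in the abstract natural: the final population itself is an ensemble of near-fitting models.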

  2. Double-inversion mechanisms of the X⁻ + CH₃Y [X,Y = F, Cl, Br, I] SN2 reactions.

    PubMed

    Szabó, István; Czakó, Gábor

    2015-03-26

    The double-inversion and front-side attack transition states as well as the proton-abstraction channels of the X(-) + CH3Y [X,Y = F, Cl, Br, I] reactions are characterized at the explicitly correlated CCSD(T)-F12b/aug-cc-pVTZ(-PP) level of theory using small-core relativistic effective core potentials and the corresponding aug-cc-pVTZ-PP bases for Br and I. In the X = F case the double-inversion classical (adiabatic) barrier heights are 28.7(25.6), 15.8(13.4), 13.2(11.0), and 8.6(6.6) kcal mol(-1) for Y = F, Cl, Br, and I, respectively, whereas the barrier heights are in the 40-90 kcal mol(-1) range for the other 12 reactions. The abstraction channels are always above the double-inversion saddle points. For X = F, the front-side attack classical (adiabatic) barrier heights, 45.8(44.8), 31.0(30.3), 24.7(24.2), and 19.5(19.3) kcal mol(-1) for Y = F, Cl, Br, and I, respectively, are higher than the corresponding double-inversion ones, whereas for the other systems the front-side attack saddle points are in the 35-70 kcal mol(-1) range. The double-inversion transition states have XH···CH2Y(-) structures with Cs point-group symmetry, and the front-side attack saddle points have either Cs (X = F or X = Y) or C1 symmetry with XCY angles in the 78-88° range. On the basis of previous reaction dynamics simulations and the minimum energy path computations along the inversion coordinate of selected XH···CH2Y(-) systems, we suggest that double inversion may be a general mechanism for SN2 reactions.

  3. Finite element dynamic analysis of soft tissues using state-space model.

    PubMed

    Iorga, Lucian N; Shan, Baoxiang; Pelegri, Assimina A

    2009-04-01

    A finite element (FE) model is employed to investigate the dynamic response of soft tissues under external excitations, particularly corresponding to the case of harmonic motion imaging. A solid 3D mixed 'u-p' element, S8P0, is implemented to capture the near-incompressibility inherent in soft tissues. Two important aspects of structural modelling of these tissues are studied: the influence of viscous damping on the dynamic response, and a state-space formulation, derived from the FE model, that evaluates the efficiency of several order-reduction methods. It is illustrated that the order of the mathematical model can be significantly reduced while preserving the accuracy of the observed system dynamics. Thus, the reduced-order state-space representation of soft tissues for general dynamic analysis significantly reduces the computational cost and provides a unitary framework for the 'forward' simulation and 'inverse' estimation of soft tissues. Moreover, the results suggest that damping in soft tissue is significant, effectively cancelling the contribution of all but the first few vibration modes.

  4. A forward model-based validation of cardiovascular system identification

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.

    2001-01-01

    We present a theoretical evaluation of a cardiovascular system identification method that we previously developed for the analysis of beat-to-beat fluctuations in noninvasively measured heart rate, arterial blood pressure, and instantaneous lung volume. The method provides a dynamical characterization of the important autonomic and mechanical mechanisms responsible for coupling the fluctuations (inverse modeling). To carry out the evaluation, we developed a computational model of the cardiovascular system capable of generating realistic beat-to-beat variability (forward modeling). We applied the method to data generated from the forward model and compared the resulting estimated dynamics with the actual dynamics of the forward model, which were either precisely known or easily determined. We found that the estimated dynamics corresponded to the actual dynamics and that this correspondence was robust to forward model uncertainty. We also demonstrated the sensitivity of the method in detecting small changes in parameters characterizing autonomic function in the forward model. These results provide confidence in the performance of the cardiovascular system identification method when applied to experimental data.

  5. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is a current research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models is a new approach to computing fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat-source solution methods for the fatigue process. It discusses prospects for applying inverse heat-source methods in the fatigue damage field and lays a foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.
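
The combination of a heat-source inverse problem with regularization can be sketched with a linear toy model (an illustrative assumption, not the paper's formulation): sensor temperatures respond linearly to unknown source intensities, and a Tikhonov term stabilizes the least-squares estimate against noise.

```python
# Hedged sketch of regularized heat-source inversion on a linear toy
# problem: T = G q + noise, where q holds two unknown source intensities
# and G maps sources to three temperature sensors.  Tikhonov
# regularization adds lam*I to the normal equations to stabilize them.

# Forward operator G (2 sources -> 3 sensors), chosen arbitrarily.
G = [[1.0, 0.5],
     [0.8, 0.9],
     [0.3, 1.1]]
q_true = [2.0, 1.0]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Synthetic noisy observations.
T_obs = [t + n for t, n in zip(matvec(G, q_true), [0.01, -0.02, 0.015])]

def tikhonov(G, d, lam):
    # Solve (G^T G + lam*I) q = G^T d explicitly for the 2x2 case.
    a = sum(row[0] * row[0] for row in G) + lam
    b = sum(row[0] * row[1] for row in G)
    c = sum(row[1] * row[1] for row in G) + lam
    r0 = sum(row[0] * y for row, y in zip(G, d))
    r1 = sum(row[1] * y for row, y in zip(G, d))
    det = a * c - b * b
    return [(c * r0 - b * r1) / det, (a * r1 - b * r0) / det]

q_est = tikhonov(G, T_obs, lam=1e-3)
```

The regularization weight lam trades data fit against estimate stability; choosing it well is exactly the kind of question the regularization literature cited in the abstract addresses.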

  6. Using CFD Surface Solutions to Shape Sonic Boom Signatures Propagated from Off-Body Pressure

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Li, Wu

    2013-01-01

    The conceptual design of a low-boom and low-drag supersonic aircraft remains a challenge despite significant progress in recent years. Inverse design using reversed equivalent area and adjoint methods have been demonstrated to be effective in shaping the ground signature propagated from computational fluid dynamics (CFD) off-body pressure distributions. However, there is still a need to reduce the computational cost in the early stages of design to obtain a baseline that is feasible for low-boom shaping, and in the search for a robust low-boom design over the entire sonic boom footprint. The proposed design method addresses the need to reduce the computational cost for robust low-boom design by using surface pressure distributions from CFD solutions to shape sonic boom ground signatures propagated from CFD off-body pressure.

  7. Output Tracking for Systems with Non-Hyperbolic and Near Non-Hyperbolic Internal Dynamics: Helicopter Hover Control

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. This approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion in near non-hyperbolic cases. The method is applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics, illustrating the trade-off between exact tracking and reduction of pre-actuation time.

  8. Mediation of donor–acceptor distance in an enzymatic methyl transfer reaction

    PubMed Central

    Zhang, Jianyu; Kulik, Heather J.; Martinez, Todd J.; Klinman, Judith P.

    2015-01-01

    Enzymatic methyl transfer, catalyzed by catechol-O-methyltransferase (COMT), is investigated using binding isotope effects (BIEs), time-resolved fluorescence lifetimes, Stokes shifts, and extended graphics processing unit (GPU)-based quantum mechanics/molecular mechanics (QM/MM) approaches. The WT enzyme is compared with mutants at Tyr68, a conserved residue that is located behind the reactive sulfur of the cofactor. Small (>1) BIEs are observed for an S-adenosylmethionine (AdoMet) binary complex and an abortive ternary complex containing 8-hydroxyquinoline, and contrast with previously reported inverse (<1) kinetic isotope effects (KIEs). Extended GPU-based computational studies of a ternary complex containing catecholate show a clear trend in ground state structures, from noncanonical bond lengths for WT toward solution values with mutants. Structural and dynamical differences that are sensitive to Tyr68 have also been detected using time-resolved Stokes shift measurements and molecular dynamics. These experimental and computational results are discussed in the context of active site compaction that requires an ionization of substrate within the enzyme ternary complex. PMID:26080432

  9. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed during inversion. The method is an automatic computer processing technique with no additional cost, and it avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements, and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface, seabed and underground stations can be used in the inversion algorithm.
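
The NLCG optimizer family cited above can be sketched on a toy two-parameter misfit function. The objective here is an illustrative stand-in, not a magnetotelluric functional.

```python
# Hedged sketch of nonlinear conjugate gradients (NLCG) with a
# Polak-Ribiere update and a crude backtracking line search, applied to
# a toy quadratic misfit rather than a real MT objective.

def objective(m):
    x, y = m
    return (x - 3.0) ** 2 + 10.0 * (y + 1.0) ** 2

def gradient(m):
    x, y = m
    return [2.0 * (x - 3.0), 20.0 * (y + 1.0)]

def nlcg(m, iters=100):
    g = gradient(m)
    d = [-gi for gi in g]
    for _ in range(iters):
        # Backtracking line search: halve the step until the misfit decreases.
        step, f0 = 1.0, objective(m)
        while objective([mi + step * di for mi, di in zip(m, d)]) > f0:
            step *= 0.5
            if step < 1e-12:
                break
        m = [mi + step * di for mi, di in zip(m, d)]
        g_new = gradient(m)
        gg = sum(gi * gi for gi in g)
        if gg == 0.0:
            break
        # Polak-Ribiere coefficient, clipped at zero (automatic restart).
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g)) / gg)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return m

m_est = nlcg([0.0, 0.0])
```

The conjugacy update is what distinguishes NLCG from plain steepest descent and keeps memory use at a few vectors, which is why it scales to large 3D inversions.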

  10. The role of simulated small-scale ocean variability in inverse computations for ocean acoustic tomography.

    PubMed

    Dushaw, Brian D; Sagen, Hanne

    2017-12-01

    Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than being a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that match the observations. The approach is not ad hoc, but universal, and it has allowed inverse estimates for ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.

  11. Inverse problem studies of biochemical systems with structure identification of S-systems by embedding training functions in a genetic algorithm.

    PubMed

    Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D

    2016-05-01

    An efficient inverse problem approach for parameter estimation, state and structure identification from dynamic data by embedding training functions in a genetic algorithm methodology (ETFGA) is proposed for nonlinear dynamical biosystems using S-system canonical models. Use of multiple shooting and decomposition approach as training functions has been shown for handling of noisy datasets and computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying it for three biochemical model systems of interest. By studying a small-scale gene regulatory system described by a S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA on comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells, now assuming limited availability of noisy data. Here, the flexibility of the approach to incorporate partial system information in the identification process is shown, and its effect on the accuracy and predictive ability of the estimated model is studied. The third example studies the phenomenological toy model of the regulation of circadian oscillations in Drosophila that follows rate laws different from S-system power-law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Inverse free steering law for small satellite attitude control and power tracking with VSCMGs

    NASA Astrophysics Data System (ADS)

    Malik, M. S. I.; Asghar, Sajjad

    2014-01-01

    Recent developments in integrated power and attitude control systems (IPACSs) for small satellites have opened a new dimension for more complex and demanding space missions. This paper presents a new inverse-free steering approach for integrated power and attitude control using variable-speed single-gimbal control moment gyroscopes (VSCMGs). The proposed inverse-free steering law computes the VSCMG steering commands (gimbal rates and wheel accelerations) such that the error signal (the difference between command and output) in the feedback loop is driven to zero. An H∞ norm optimization approach is employed to synthesize the static matrix elements of the steering law for a static state of the VSCMG; these matrix elements are then made dynamic to allow adaptation. To improve the performance of the proposed steering law while passing through a singular state of the CMG cluster (no torque output), the matrix elements of the steering law are suitably modified. The steering law is therefore capable of escaping internal singularities and using the full momentum capacity of the CMG cluster. Finally, two numerical examples for a satellite in low Earth orbit are simulated to test the proposed steering law.

  13. Elastic Cherenkov effects in transversely isotropic soft materials-II: Ex vivo and in vivo experiments

    NASA Astrophysics Data System (ADS)

    Li, Guo-Yang; He, Qiong; Qian, Lin-Xue; Geng, Huiying; Liu, Yanlin; Yang, Xue-Yi; Luo, Jianwen; Cao, Yanping

    2016-09-01

    In part I of this study, we investigated the elastic Cherenkov effect (ECE) in an incompressible transversely isotropic (TI) soft solid using a combined theoretical and computational approach, based on which an inverse method has been proposed to measure both the anisotropic and hyperelastic parameters of TI soft tissues. In this part, experiments were carried out to validate the inverse method and demonstrate its usefulness in practical measurements. We first performed ex vivo experiments on bovine skeletal muscles. Not only the shear moduli along and perpendicular to the direction of muscle fibers but also the elastic modulus EL and hyperelastic parameter c2 were determined. We next carried out tensile tests to determine EL, which was compared with the value obtained using the shear wave elastography method. Furthermore, we conducted in vivo experiments on the biceps brachii and gastrocnemius muscles of ten healthy volunteers. To the best of our knowledge, this study represents the first attempt to determine EL of human muscles using the dynamic elastography method and inverse analysis. The significance of our method and its potential for clinical use are discussed.

  14. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  15. 3D Dynamics of the Near-Surface Layer of the Ocean in the Presence of Freshwater Influx

    NASA Astrophysics Data System (ADS)

    Dean, C.; Soloviev, A.

    2015-12-01

    Freshwater inflow due to convective rains or river runoff produces lenses of freshened water in the near surface layer of the ocean. These lenses are localized in space and typically involve both salinity and temperature anomalies. Due to significant density anomalies, strong pressure gradients develop, which result in lateral spreading of freshwater lenses in a form resembling gravity currents. Gravity currents inherently involve three-dimensional dynamics. The gravity current head can include the Kelvin-Helmholtz billows with vertical density inversions. In this work, we have conducted a series of numerical experiments using computational fluid dynamics tools. These numerical simulations were designed to elucidate the relationship between vertical mixing and horizontal advection of salinity under various environmental conditions and potential impact on the pollution transport including oil spills. The near-surface data from the field experiments in the Gulf of Mexico during the SCOPE experiment were available for validation of numerical simulations. In particular, we observed a freshwater layer within a few-meter depth range and, in some cases, a density inversion at the edge of the freshwater lens, which is consistent with the results of numerical simulations. In conclusion, we discuss applicability of these results to the interpretation of Aquarius and SMOS sea surface salinity satellite measurements. The results of this study indicate that 3D dynamics of the near-surface layer of the ocean are essential in the presence of freshwater inflow.

  16. Computational methods for the identification of spatially varying stiffness and damping in beams

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rosen, I. G.

    1986-01-01

    A numerical approximation scheme for the estimation of functional parameters in Euler-Bernoulli models for the transverse vibration of flexible beams with tip bodies is developed. The method permits the identification of spatially varying flexural stiffness and Voigt-Kelvin viscoelastic damping coefficients which appear in the hybrid system of ordinary and partial differential equations and boundary conditions describing the dynamics of such structures. An inverse problem is formulated as a least squares fit to data subject to constraints in the form of a vector system of abstract first order evolution equations. Spline-based finite element approximations are used to finite dimensionalize the problem. Theoretical convergence results are given and numerical studies carried out on both conventional (serial) and vector computers are discussed.

  17. Inverse dynamics of adaptive structures used as space cranes

    NASA Technical Reports Server (NTRS)

    Das, S. K.; Utku, S.; Wada, B. K.

    1990-01-01

    As a precursor to the real-time control of fast-moving adaptive structures used as space cranes, a formulation is given for the flexibility-induced motion relative to the nominal motion (i.e., the motion that assumes no flexibility) and for obtaining the open-loop time-varying driving forces. An algorithm is proposed for the computation of the relative motion and driving forces. The governing equations are given in matrix form with explicit functional dependencies. A simulator is developed to implement the algorithm on a digital computer. In the formulations, the distributed mass of the crane is lumped by two schemes, viz., 'trapezoidal' lumping and 'Simpson's rule' lumping. The effects of the mass lumping schemes are shown by simulator runs.

  18. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
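    The reinversion procedure the report refers to can be illustrated with the classical Newton-Schulz refinement step X' = X(2I - AX), combined with tolerance-based rounding of near-zero entries; whether the report used exactly this iteration is not stated, so treat this as a stand-in. The matrix, perturbation and tolerance below are made-up test values.

```python
import numpy as np

def refine_inverse(A, X, tol=1e-13):
    # One Newton-Schulz refinement step, X' = X (2I - A X), which squares the
    # residual I - A X, followed by rounding near-zero entries to exactly zero
    # with a small tolerance (the report's other error-control procedure).
    X = X @ (2.0 * np.eye(A.shape[0]) - A @ X)
    X[np.abs(X) < tol] = 0.0
    return X

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) + 8.0 * np.eye(8)          # well-conditioned test matrix
X = np.linalg.inv(A) + 1e-6 * rng.standard_normal((8, 8))  # noisy approximate inverse
err0 = float(np.linalg.norm(A @ X - np.eye(8)))            # residual before refinement
X = refine_inverse(A, X)
err1 = float(np.linalg.norm(A @ X - np.eye(8)))            # residual after one step
```

    Because the step squares the residual, a single application is usually enough to push accumulated round-off well below the working tolerance, matching the report's "once per computed inverse" recommendation.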

  19. Inverse kinematics of a dual linear actuator pitch/roll heliostat

    NASA Astrophysics Data System (ADS)

    Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh

    2017-06-01

    This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
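    The angular part of such a solution can be sketched in closed form. This is a hedged illustration under an assumed pitch-about-x, then roll-about-y mount convention; the paper's mapping from angles to the two linear-actuator lengths is not reproduced here.

```python
import math

def mirror_normal(sun, target):
    # A heliostat mirror normal bisects the (unit) sun direction and the
    # (unit) mirror-to-target direction: n is proportional to sun + target.
    s = [a + b for a, b in zip(sun, target)]
    m = math.sqrt(sum(c * c for c in s))
    return tuple(c / m for c in s)

def pitch_roll_from_normal(n):
    # Closed-form inverse kinematics of the angular part, for the assumed
    # mount n = Ry(roll) @ Rx(pitch) @ (0, 0, 1) (hypothetical convention).
    nx, ny, nz = n
    pitch = -math.asin(ny)
    roll = math.atan2(nx, nz)
    return pitch, roll

def normal_from_pitch_roll(pitch, roll):
    # Forward kinematics of the same assumed mount, for verification.
    return (math.sin(roll) * math.cos(pitch),
            -math.sin(pitch),
            math.cos(roll) * math.cos(pitch))

sun = (0.3, 0.4, math.sqrt(1.0 - 0.3**2 - 0.4**2))  # unit vector toward the sun
target = (0.0, 0.0, 1.0)                            # unit vector toward the receiver
n = mirror_normal(sun, target)
p, r = pitch_roll_from_normal(n)
```

    Round-tripping the angles through the forward kinematics recovers the commanded normal, which is the basic consistency check for any such convention.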

  20. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L2 norm without any physical justification. When it is a priori known that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the minimum squared L2 norm yields an image that is equally in agreement with the available data, while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness.
In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).

  1. Fluid moments of the nonlinear Landau collision operator

    DOE PAGES

    Hirvijoki, E.; Lingam, M.; Pfefferle, D.; ...

    2016-08-09

    An important problem in plasma physics is the lack of an accurate and complete description of Coulomb collisions in associated fluid models. To shed light on the problem, this Letter introduces an integral identity involving the multivariate Hermite tensor polynomials and presents a method for computing exact expressions for the fluid moments of the nonlinear Landau collision operator. In conclusion, the proposed methodology provides a systematic and rigorous means of extending the validity of fluid models that have an underlying inverse-square force particle dynamics to arbitrary collisionality and flow.

  2. Utilizing High-Performance Computing to Investigate Parameter Sensitivity of an Inversion Model for Vadose Zone Flow and Transport

    NASA Astrophysics Data System (ADS)

    Fang, Z.; Ward, A. L.; Fang, Y.; Yabusaki, S.

    2011-12-01

    High-resolution geologic models have proven effective in improving the accuracy of subsurface flow and transport predictions. However, many of the parameters in subsurface flow and transport models cannot be determined directly at the scale of interest and must be estimated through inverse modeling. A major challenge, particularly in vadose zone flow and transport, is the inversion of the highly nonlinear, high-dimensional problem, as current methods are not readily scalable for large-scale, multi-process models. In this paper we describe the implementation of a fully automated approach for addressing complex parameter optimization and sensitivity issues on massively parallel multi- and many-core systems. The approach is based on the integration of PNNL's extreme-scale Subsurface Transport Over Multiple Phases (eSTOMP) simulator, which uses the Global Array toolkit, with the Beowulf-cluster-inspired parallel nonlinear parameter estimation software BeoPEST in MPI mode. In the eSTOMP/BeoPEST implementation, a pre-processor generates all of the PEST input files based on the eSTOMP input file. Simulation results for comparison with observations are extracted automatically at each time step, eliminating the need for post-process data extraction. The inversion framework was tested with three different experimental data sets: one-dimensional water flow at the Hanford Grass Site; an irrigation and infiltration experiment at the Andelfingen Site; and a three-dimensional injection experiment at Hanford's Sisson and Lu Site. Good agreement between observations and simulations is achieved in all three applications, in both the parameter estimates and the reproduction of water dynamics. Results show that the eSTOMP/BeoPEST approach is highly scalable and can be run efficiently with hundreds or thousands of processors. BeoPEST is fault tolerant, and new nodes can be dynamically added and removed. 
A major advantage of this approach is the ability to use high-resolution geologic models to preserve the spatial structure in the inverse model, which leads to better parameter estimates and improved predictions when using the inverse-conditioned realizations of parameter fields.

  3. Dynamic gamma knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Luan, Shuang; Swanson, Nathan; Chen, Zhe; Ma, Lijun

    2009-03-01

    Gamma knife has been the treatment of choice for various brain tumors and functional disorders. Current gamma knife radiosurgery is planned in a 'ball-packing' approach and delivered in a 'step-and-shoot' manner, i.e. it aims to 'pack' the different sized spherical high-dose volumes (called 'shots') into a tumor volume. We have developed a dynamic scheme for gamma knife radiosurgery based on the concept of 'dose-painting' to take advantage of the new robotic patient positioning system on the latest Gamma Knife C™ and Perfexion™ units. In our scheme, the spherical high dose volume created by the gamma knife unit will be viewed as a 3D spherical 'paintbrush', and treatment planning reduces to finding the best route of this 'paintbrush' to 'paint' a 3D tumor volume. Under our dose-painting concept, gamma knife radiosurgery becomes dynamic, where the patient moves continuously under the robotic positioning system. We have implemented a fully automatic dynamic gamma knife radiosurgery treatment planning system, where the inverse planning problem is solved as a traveling salesman problem combined with constrained least-square optimizations. We have also carried out experimental studies of dynamic gamma knife radiosurgery and showed the following. (1) Dynamic gamma knife radiosurgery is ideally suited for fully automatic inverse planning, where high quality radiosurgery plans can be obtained in minutes of computation. (2) Dynamic radiosurgery plans are more conformal than step-and-shoot plans and can maintain a steep dose gradient (around 13% per mm) between the target tumor volume and the surrounding critical structures. (3) It is possible to prescribe multiple isodose lines with dynamic gamma knife radiosurgery, so that the treatment can cover the periphery of the target volume while escalating the dose for high tumor burden regions. 
(4) With dynamic gamma knife radiosurgery, one can obtain a family of plans representing a tradeoff between the delivery time and the dose distributions, thus giving the clinician one more dimension of flexibility of choosing a plan based on the clinical situations.
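    The routing part of the planning problem can be sketched with the simplest traveling-salesman heuristic. This is only an illustration of the TSP framing, with made-up shot coordinates; the paper's planner couples the tour with constrained least-squares dose optimization, which is not reproduced here.

```python
import math

def nearest_neighbor_tour(points, start=0):
    # Greedy TSP heuristic: repeatedly move the 'paintbrush' to the nearest
    # unvisited dose point. A real planner would refine this tour further.
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        cur = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Hypothetical 3D shot positions inside a target volume (illustrative only).
shots = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.1, 0.0, 0.0), (2.0, 0.0, 0.0)]
tour = nearest_neighbor_tour(shots)
```

    On this toy set the greedy rule visits the nearby shot at x = 0.1 before the farther ones, shortening the continuous path the patient positioner must traverse.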

  4. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
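    The quadrature underlying such methods is the trapezoidal-rule approximation of the Bromwich integral; a minimal direct (non-FFT) sketch, with the contour abscissa and truncation chosen ad hoc, is:

```python
import numpy as np

def bromwich_trapezoid(F, t, sigma=1.0, omega_max=2000.0, n=200001):
    # Trapezoidal-rule approximation of the Bromwich inversion integral
    #   f(t) = (e^(sigma*t)/pi) * Int_0^inf Re[F(sigma + i*w) * e^(i*w*t)] dw.
    # This is the bare quadrature the paper builds on; the FFT/FHT machinery
    # that evaluates such sums efficiently is not reproduced here, and the
    # abscissa sigma and the truncation omega_max are ad hoc choices.
    w = np.linspace(0.0, omega_max, n)
    h = w[1] - w[0]
    vals = np.real(F(sigma + 1j * w) * np.exp(1j * w * t))
    integral = h * (vals.sum() - 0.5 * vals[0] - 0.5 * vals[-1])
    return float(np.exp(sigma * t) / np.pi * integral)

# Check against a known transform pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
f_hat = bromwich_trapezoid(lambda s: 1.0 / (s + 1.0), 1.0)
```

    The paper's contribution is precisely to replace this brute-force sum with (m/2) + 1 complex FFTs or m real FHTs, with m controlling the step length and hence the accuracy.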

  5. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and proper population size promote the convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability by a mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. 
Another significant result is that the final solution is determined by the average model derived from multiple trials instead of one computation, due to the randomness in the genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
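    The hybrid-encoding idea (crossover acting on the binary code, mutation on the decimal code) can be sketched on a toy one-parameter minimization. This is a hedged illustration, not the paper's HEGA: the selection scheme, fixed population size and all constants here are simplified assumptions.

```python
import random

def to_bits(x, lo, hi, nbits=16):
    # Decimal value in [lo, hi] -> fixed-point binary string of nbits.
    frac = (x - lo) / (hi - lo)
    return format(int(frac * (2**nbits - 1)), f'0{nbits}b')

def from_bits(bits, lo, hi):
    return lo + int(bits, 2) / (2**len(bits) - 1) * (hi - lo)

def hybrid_step(pop, fitness, lo, hi, rng, pm=0.1):
    # Elitist truncation selection, then offspring built by multi-point
    # crossover in the *binary* code and Gaussian mutation in the *decimal*
    # code (decimal mutation makes larger-scope jumps, as the abstract notes).
    pop = sorted(pop, key=fitness)[:len(pop) // 2]
    children = []
    while len(children) < len(pop):
        a, b = rng.sample(pop, 2)
        ba, bb = to_bits(a, lo, hi), to_bits(b, lo, hi)
        p1, p2 = sorted(rng.sample(range(1, len(ba)), 2))
        child = from_bits(ba[:p1] + bb[p1:p2] + ba[p2:], lo, hi)
        if rng.random() < pm:
            child = min(hi, max(lo, child + rng.gauss(0.0, 0.05 * (hi - lo))))
        children.append(child)
    return pop + children

rng = random.Random(0)
f = lambda x: (x - 3.0) ** 2          # toy objective with minimum at x = 3
pop = [rng.uniform(-10.0, 10.0) for _ in range(40)]
for _ in range(60):
    pop = hybrid_step(pop, f, -10.0, 10.0, rng)
best = min(pop, key=f)
```

    Keeping the parent half of the population makes the step elitist, so the best individual never degrades across generations.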

  6. Location identification for indoor instantaneous point contaminant source by probability-based inverse Computational Fluid Dynamics modeling.

    PubMed

    Liu, X; Zhai, Z

    2008-02-01

    Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-based inverse modeling method that is able to identify the source location of an instantaneous point source placed in an enclosed environment with known source release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating good capability of the method in finding indoor pollutant sources. The research lays a solid ground for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source locations with limited sensor outputs. This will ensure effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.

  7. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  8. Reproducible Hydrogeophysical Inversions through the Open-Source Library pyGIMLi

    NASA Astrophysics Data System (ADS)

    Wagner, F. M.; Rücker, C.; Günther, T.

    2017-12-01

    Many tasks in applied geosciences cannot be solved by a single measurement method and require the integration of geophysical, geotechnical and hydrological methods. In the emerging field of hydrogeophysics, researchers strive to gain quantitative information on process-relevant subsurface parameters by means of multi-physical models, which simulate the dynamic process of interest as well as its geophysical response. However, such endeavors are associated with considerable technical challenges, since they require coupling of different numerical models. This represents an obstacle for many practitioners and students. Even technically versatile users tend to build individually tailored solutions by coupling different (and potentially proprietary) forward simulators at the cost of scientific reproducibility. We argue that the reproducibility of studies in computational hydrogeophysics, and therefore the advancement of the field itself, requires versatile open-source software. To this end, we present pyGIMLi - a flexible and computationally efficient framework for modeling and inversion in geophysics. The object-oriented library provides management for structured and unstructured meshes in 2D and 3D, finite-element and finite-volume solvers, various geophysical forward operators, as well as Gauss-Newton based frameworks for constrained, joint and fully-coupled inversions with flexible regularization. In a step-by-step demonstration, it is shown how the hydrogeophysical response of a saline tracer migration can be simulated. Tracer concentration data from boreholes and measured voltages at the surface are subsequently used to estimate the hydraulic conductivity distribution of the aquifer within a single reproducible Python script.

  9. Semiclassics, Goldstone bosons and CFT data

    NASA Astrophysics Data System (ADS)

    Monin, A.; Pirtskhalava, D.; Rattazzi, R.; Seibold, F. K.

    2017-06-01

    Hellerman et al. (arXiv:1505.01537) have shown that in a generic CFT the spectrum of operators carrying a large U(1) charge can be analyzed semiclassically in an expansion in inverse powers of the charge. The key is the operator state correspondence by which such operators are associated with a finite density superfluid phase for the theory quantized on the cylinder. The dynamics is dominated by the corresponding Goldstone hydrodynamic mode and the derivative expansion coincides with the inverse charge expansion. We illustrate and further clarify this situation by first considering simple quantum mechanical analogues. We then systematize the approach by employing the coset construction for non-linearly realized space-time symmetries. Focussing on CFT3 we illustrate the case of higher rank and non-abelian groups and the computation of higher point functions. Three point function coefficients turn out to satisfy universal scaling laws and correlations as the charge and spin are varied.
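    A minimal sketch of the scaling law referred to above, assuming the standard large-charge result of Hellerman et al.: for the lowest operator of large U(1) charge Q in a CFT3, the superfluid description predicts

```latex
\[
\Delta_Q \;=\; c_{3/2}\, Q^{3/2} \;+\; c_{1/2}\, Q^{1/2} \;+\; c_0 \;+\; \mathcal{O}\!\left(Q^{-1/2}\right),
\]
```

    where c_{3/2} and c_{1/2} are theory-dependent Wilson coefficients of the Goldstone effective action and c_0 is a calculable Casimir-energy contribution of the Goldstone mode, with the derivative expansion organized in inverse powers of the charge as described in the abstract.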

  10. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  11. An approach to multivariable control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions of computation of the feedforward and feedback gains are obtained based on the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.
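    The two-part structure described above can be sketched for a single joint. This is a minimal illustration, not the paper's multivariable scheme: the joint is reduced to an assumed linear model J·q'' + B·q' = u, the PD2 feedforward is the exact inverse of that model evaluated on the desired trajectory, and the PID gains are illustrative.

```python
import math

# One-joint sketch: PD2 feedforward (from desired position, velocity and
# acceleration) plus PID feedback, tracking q_des(t) = sin(t).
J, B = 2.0, 0.5                       # assumed joint inertia and viscous damping
Kp, Ki, Kd = 400.0, 50.0, 40.0        # illustrative PID feedback gains
dt = 1e-3
q = v = integ = 0.0                   # actual joint angle, velocity, error integral
for i in range(20000):                # 20 s of simulated trajectory tracking
    t = i * dt
    qdes, vdes, ades = math.sin(t), math.cos(t), -math.sin(t)
    u_ff = J * ades + B * vdes        # feedforward: inverse of the linear model
    e = qdes - q
    integ += e * dt
    u_fb = Kp * e + Ki * integ + Kd * (vdes - v)
    v += (u_ff + u_fb - B * v) / J * dt   # explicit Euler on J*v' = u - B*v
    q += v * dt
track_err = abs(math.sin(20000 * dt) - q)
```

    Because the feedforward cancels the (assumed exact) model, the feedback only has to remove the initial transient, and steady-state tracking of the sinusoid is essentially perfect.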

  12. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis.

    PubMed

    Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fails. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
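    The inverse-transform idea can be sketched with a regularized non-negative fit of a relaxation-time distribution to multi-exponential decay data. This is a toy illustration, not the authors' algorithm and not CONTIN itself: the kernel, grids, synthetic bimodal data, regularization weight and the projected-gradient solver are all assumptions.

```python
import numpy as np

# Recover a distribution x(tau) from decay data b(t) = sum_j x_j exp(-t/tau_j)
# via Tikhonov-regularized non-negative least squares.
t = np.logspace(-3, 2, 100)                      # lag times
tau = np.logspace(-2, 2, 60)                     # candidate relaxation times
A = np.exp(-t[:, None] / tau[None, :])           # kernel A[i, j] = exp(-t_i / tau_j)
b = 0.6 * np.exp(-t / 0.1) + 0.4 * np.exp(-t / 5.0)  # synthetic bimodal decay

lam = 1e-3                                       # Tikhonov regularization weight
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # 1 / Lipschitz constant of gradient
x = np.zeros(tau.size)                           # distribution over tau
for _ in range(50000):                           # projected gradient descent
    grad = A.T @ (A @ x - b) + lam * x
    x = np.maximum(x - step * grad, 0.0)         # gradient step, then project x >= 0

resid = float(np.linalg.norm(A @ x - b))         # the nonneg fit matches the data
```

    A single stretched-exponential fit would blur the two timescales together; the non-negative distribution can keep them separate, which is the point of the inverse-transform analysis.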

  13. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis

    PubMed Central

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-01-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fails. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS. PMID:29875506

  14. Mathematical design of a novel input/instruction device using a moving acoustic emitter

    NASA Astrophysics Data System (ADS)

    Wang, Xianchao; Guo, Yukun; Li, Jingzhi; Liu, Hongyu

    2017-10-01

    This paper is concerned with the mathematical design of a novel input/instruction device using a moving emitter. The emitter acts as a point source and can be installed on a digital pen or worn on the finger of the human being who desires to interact/communicate with the computer. The input/instruction can be recognized by identifying the moving trajectory of the emitter performed by the human being from the collected wave field data. The identification process is modelled as an inverse source problem where one intends to identify the trajectory of a moving point source. There are several salient features of our study which distinguish our result from the existing ones in the literature. First, the point source is moving in an inhomogeneous background medium, which models the human body. Second, the dynamical wave field data are collected in a limited aperture. Third, the reconstruction method is independent of the background medium, and it is totally direct without any matrix inversion. Hence, it is efficient and robust with respect to the measurement noise. Both theoretical justifications and computational experiments are presented to verify our novel findings.

  15. Constraint Embedding Technique for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been a considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthrough on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited only to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations for the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure-loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra approach to formulating the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. 
Thus in essence, the new technique allows conversion of a system with closure-constraints into an equivalent tree-topology system, and thus allows one to take advantage of the host of techniques available to the latter class of systems. This technology is highly suitable for the class of multibody systems where the closure-constraints are local, i.e., where they are confined to small groupings of bodies within the system. Important examples of such local closure-constraints are constraints associated with four-bar linkages, geared motors, differential suspensions, etc. One can eliminate these closure-constraints and convert the system into a tree-topology system by embedding the constraints directly into the system dynamics and effectively replacing the body groupings with virtual aggregate bodies. Once eliminated, one can apply the well-known results and algorithms for tree-topology systems to solve the dynamics of such closed-chain system.

  16. Inverse boundary-layer theory and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Carter, J. E.

    1978-01-01

    Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.
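    The Newton iteration at the heart of both inverse procedures can be illustrated on a scalar equation; the function solved below is a stand-in, not the boundary-layer equations themselves.

```python
def newton(F, dF, x0, tol=1e-12, itmax=50):
    """Newton iteration of the kind used to close the inverse
    boundary-layer system, shown here on a scalar equation F(x) = 0."""
    x = x0
    for _ in range(itmax):
        dx = -F(x) / dF(x)   # Newton update from the local linearization
        x += dx
        if abs(dx) < tol:
            break
    return x

# illustrative closure equation: find the root of F(p) = p^3 - 2p - 5
p = newton(lambda p: p ** 3 - 2 * p - 5, lambda p: 3 * p ** 2 - 2, x0=2.0)
```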

  17. Computer programs for forward and inverse modeling of acoustic and electromagnetic data

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2011-01-01

    A suite of computer programs was developed by U.S. Geological Survey personnel for forward and inverse modeling of acoustic and electromagnetic data. This report describes the computer resources that are needed to execute the programs, the installation of the programs, the program designs, some tests of their accuracy, and some suggested improvements.

  18. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
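    The collocation idea for a second-kind integral equation u(x) - ∫ K(x,t) u(t) dt = f(x) can be sketched as follows; for brevity this uses point collocation with trapezoid quadrature rather than the paper's spline collocation scheme, and the kernel is an illustrative example whose exact solution is u(x) = x.

```python
import numpy as np

def solve_fredholm2(K, f, n=200):
    """Collocation solve of u(x) - \u222b_0^1 K(x,t) u(t) dt = f(x):
    collocate at n nodes, approximate the integral by trapezoid
    quadrature, and solve the resulting linear system (I - K W) u = f."""
    x, h = np.linspace(0.0, 1.0, n, retstep=True)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0                           # trapezoid weights
    A = np.eye(n) - K(x[:, None], x[None, :]) * w    # (I - K W)
    return x, np.linalg.solve(A, f(x))

# kernel K(x,t) = x*t with f chosen so the exact solution is u(x) = x
x, u = solve_fredholm2(lambda x, t: x * t, lambda x: 2.0 * x / 3.0)
```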

  19. SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K; Kim, D; Kim, T

    2015-06-15

    Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of a conventional computed tomography (CT) system, such as the scatter problem induced by large detector size and cone-beam artifact. In this study, we present the concept of a four-dimensional (4D) IGCT system that retains these advantages while adding temporal resolution for dynamic studies and reduction of motion artifact. Methods: In contrast to a conventional CT system, projection data at a certain angle in IGCT is a group of fractionated narrow cone-beam projection data, a projection group (PG), acquired from a multi-source array whose sources are operated sequentially with an extremely short time gap. For 4D IGCT imaging, time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, the PGs acquired from the multi-source array were tagged with time and angle for 4D image reconstruction. Acquired PGs were sorted into 10 phases, and image reconstructions were performed independently at each phase. An image reconstruction algorithm based on filtered backprojection was used in this study. Results: The 4D IGCT gave uniform images without cone-beam artifact, in contrast to the 4D CBCT images. In addition, the 4D IGCT images of each phase had no significant motion-induced artifact compared with 3D CT. Conclusion: The 4D IGCT images appear to give relatively accurate dynamic information about patient anatomy, as the results were more robust to motion artifact than 3D CT. Accordingly, the technique should be useful for dynamic studies and respiratory-correlated radiation therapy.
This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT&Future Planning.
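    The phase-sorting step can be sketched as binning projection-group timestamps into 10 respiratory phases; the breathing period and timestamps below are hypothetical, purely to illustrate the sorting.

```python
import numpy as np

def sort_into_phases(times, period, n_phases=10):
    """Tag each projection group with a phase bin (0..n_phases-1)
    computed from its acquisition time, as in 4D phase sorting."""
    phase = (times % period) / period          # fractional phase in [0, 1)
    return (phase * n_phases).astype(int)

times = np.arange(0.0, 20.0, 0.25)             # hypothetical PG timestamps (s)
bins = sort_into_phases(times, period=4.0)     # assumed 4 s breathing period
```

Each phase bin is then reconstructed independently (here with filtered backprojection in the paper's case).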

  20. CONTIN XPCS: Software for Inverse Transform Analysis of X-Ray Photon Correlation Spectroscopy Dynamics

    PubMed Central

    Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-01-01

    X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) both reveal dynamics using coherent scattering, but X-rays permit investigation of dynamics in a much more diverse array of materials. Heterogeneous dynamics occur in many such materials, and we showed how classic tools employed in the analysis of heterogeneous DLS dynamics extend to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. This work presents the software implementation of inverse transform analysis of XPCS data called CONTIN XPCS, an extension of traditional CONTIN that accommodates dynamics encountered in equilibrium XPCS measurements. PMID:29875507

  1. CONTIN XPCS: Software for Inverse Transform Analysis of X-Ray Photon Correlation Spectroscopy Dynamics.

    PubMed

    Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) both reveal dynamics using coherent scattering, but X-rays permit investigation of dynamics in a much more diverse array of materials. Heterogeneous dynamics occur in many such materials, and we showed how classic tools employed in the analysis of heterogeneous DLS dynamics extend to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. This work presents the software implementation of inverse transform analysis of XPCS data called CONTIN XPCS, an extension of traditional CONTIN that accommodates dynamics encountered in equilibrium XPCS measurements.
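    The inverse transform analysis can be sketched as a regularized non-negative fit of a relaxation-rate distribution A(Γ) to g(t) = Σ_j A_j exp(-Γ_j t); the ridge-regularized NNLS below is a simple stand-in for the actual CONTIN machinery, and the two-mode signal is synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def inverse_transform(t, g, gammas, alpha=1e-2):
    """Tikhonov-regularized, non-negative fit of a relaxation-rate
    distribution A(Gamma) to g(t) = sum_j A_j exp(-Gamma_j t)."""
    E = np.exp(-np.outer(t, gammas))
    # augmented system: ridge penalty on A, with A >= 0 enforced by NNLS
    E_aug = np.vstack([E, alpha * np.eye(len(gammas))])
    g_aug = np.concatenate([g, np.zeros(len(gammas))])
    A, _ = nnls(E_aug, g_aug)
    return A

t = np.linspace(0.0, 5.0, 100)
gammas = np.logspace(-1, 1, 40)                      # trial relaxation rates
g = 0.7 * np.exp(-0.5 * t) + 0.3 * np.exp(-4.0 * t)  # two relaxation modes
A = inverse_transform(t, g, gammas)
fit = np.exp(-np.outer(t, gammas)) @ A
```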

  2. Molecular structures and intramolecular dynamics of pentahalides

    NASA Astrophysics Data System (ADS)

    Ischenko, A. A.

    2017-03-01

    This paper reviews advances of the modern gas electron diffraction (GED) method, combined with high-resolution spectroscopy and quantum chemical calculations, in studies of the impact of intramolecular dynamics in free molecules of pentahalides. Some recently developed approaches to electron diffraction data interpretation, based on direct incorporation of the adiabatic potential energy surface parameters into the diffraction intensity, are described. In this way, complementary data from different experimental and computational methods can be directly combined to solve problems of molecular structure and dynamics. The possibility of evaluating some important parameters of the adiabatic potential energy surface - barriers to pseudorotation and the saddle point of the intermediate configuration - from diffraction intensities in solving the inverse GED problem is demonstrated with several examples. With increasing accuracy of the electron diffraction intensities and the development of the theoretical background of electron scattering and data interpretation, it has become possible to investigate complex nuclear dynamics in fluxional systems by the GED method. Results of other research groups are also included in the discussion.

  3. Clinical Applications of Stochastic Dynamic Models of the Brain, Part I: A Primer.

    PubMed

    Roberts, James A; Friston, Karl J; Breakspear, Michael

    2017-04-01

    Biological phenomena arise through interactions between an organism's intrinsic dynamics and stochastic forces-random fluctuations due to external inputs, thermal energy, or other exogenous influences. Dynamic processes in the brain derive from neurophysiology and anatomical connectivity; stochastic effects arise through sensory fluctuations, brainstem discharges, and random microscopic states such as thermal noise. The dynamic evolution of systems composed of both dynamic and random effects can be studied with stochastic dynamic models (SDMs). This article, Part I of a two-part series, offers a primer of SDMs and their application to large-scale neural systems in health and disease. The companion article, Part II, reviews the application of SDMs to brain disorders. SDMs generate a distribution of dynamic states, which (we argue) represent ideal candidates for modeling how the brain represents states of the world. When augmented with variational methods for model inversion, SDMs represent a powerful means of inferring neuronal dynamics from functional neuroimaging data in health and disease. Together with deeper theoretical considerations, this work suggests that SDMs will play a unique and influential role in computational psychiatry, unifying empirical observations with models of perception and behavior. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve larger problems at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring.
The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; the work is in progress for interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).

  5. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    NASA Astrophysics Data System (ADS)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. Efforts to design safer, more durable and, above all, more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures in computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models in computer simulations is often problematic because these models contain parameters (material constants) whose values are difficult to obtain, yet correct parameter values are essential to ensure the proper function of the material model. One approach that permits a successful solution of this problem is optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks parameter values of the material model such that the data obtained from computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model. 
Within this paper, the material parameters of the model are identified through the interaction of nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter taking the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading, which may be further used for research involving dynamic and high-speed tensile loading. Based on the obtained results, it can be concluded that this goal has been reached.
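    The identification loop can be sketched as least-squares minimization of the misfit between a simulated and an "experimental" load-extension curve; the two-parameter softening law below is a toy stand-in for the Continuous Surface Cap Model, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, x):
    """Toy softening law standing in for the real material model:
    load = a * x * exp(-b * x)."""
    a, b = params
    return a * x * np.exp(-b * x)

# synthetic 'experimental' load-extension curve with known parameters
x = np.linspace(0.0, 2.0, 50)
true = np.array([3.0, 1.5])
data = model(true, x)

# inverse identification: minimize the simulation-experiment misfit
fit = least_squares(lambda p: model(p, x) - data, x0=[1.0, 1.0])
```

In the paper the "simulation" is a full nonlinear finite-element run and the optimizer is a gradient-based or nature-inspired algorithm, but the misfit-minimization structure is the same.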

  6. VLSI architectures for computing multiplications and inverses in GF(2m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.

    1985-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  7. VLSI architectures for computing multiplications and inverses in GF(2m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.

    1983-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  8. VLSI architectures for computing multiplications and inverses in GF(2m).

    PubMed

    Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S

    1985-08-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.
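    GF(2^m) arithmetic of the kind these architectures implement can be sketched in software; for brevity this uses a polynomial-basis multiplier (not the normal-basis Massey-Omura structure) and computes inverses by Fermat's theorem, a^(-1) = a^(2^m - 2), which, like the pipelined design, reduces inversion to repeated squarings and multiplications.

```python
def gf_mul(a, b, m=8, poly=0x11B):
    """Multiply in GF(2^m), polynomial basis, with reduction by 'poly'
    (default: the AES polynomial x^8 + x^4 + x^3 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a >> m:          # degree overflow: reduce modulo poly
            a ^= poly
    return r

def gf_inv(a, m=8, poly=0x11B):
    """Inverse by Fermat's little theorem: a^(2^m - 2),
    computed by square-and-multiply."""
    r, e, base = 1, (1 << m) - 2, a
    while e:
        if e & 1:
            r = gf_mul(r, base, m, poly)
        base = gf_mul(base, base, m, poly)
        e >>= 1
    return r
```

For example, in GF(2^8) with the AES polynomial, the inverse of 0x53 is 0xCA.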

  9. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. 
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each of the submitted papers has been reviewed by two reviewers. There have been 15 accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.

  10. Inverse Design: Playing "Jeopardy" in Materials Science (A "Life at the Frontiers of Energy Research" contest entry from the 2011 Energy Frontier Research Centers (EFRCs) Summit and Forum)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zunger, Alex

    "Inverse Design: Playing 'Jeopardy' in Materials Science" was submitted by the Center for Inverse Design (CID) to the "Life at the Frontiers of Energy Research" video contest at the 2011 Science for Our Nation's Energy Future: Energy Frontier Research Centers (EFRCs) Summit and Forum. Twenty-six EFRCs created short videos to highlight their mission and their work. CID, an EFRC directed by Bill Tumas at the National Renewable Energy Laboratory, is a partnership of scientists from six institutions: NREL (lead), Northwestern University, University of Colorado, Colorado School of Mines, Stanford University, and Oregon State University. The Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science established the 46 Energy Frontier Research Centers (EFRCs) in 2009. These collaboratively-organized centers conduct fundamental research focused on 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The overall purpose is to accelerate scientific progress toward meeting the nation's critical energy challenges. The mission of the Center for Inverse Design is 'to replace trial-and-error methods used in the development of materials for solar energy conversion with an inverse design approach powered by theory and computation.' Research topics are: solar photovoltaic, photonic, metamaterial, defects, spin dynamics, matter by design, novel materials synthesis, and defect tolerant materials.

  11. A test of the chromosomal theory of ecotypic speciation in Anopheles gambiae

    PubMed Central

    Manoukis, Nicholas C.; Powell, Jeffrey R.; Touré, Mahamoudou B.; Sacko, Adama; Edillo, Frances E.; Coulibaly, Mamadou B.; Traoré, Sekou F.; Taylor, Charles E.; Besansky, Nora J.

    2008-01-01

    The role of chromosomal inversions in speciation has long been of interest to evolutionists. Recent quantitative modeling has stimulated reconsideration of previous conceptual models for chromosomal speciation. Anopheles gambiae, the most important vector of human malaria, carries abundant chromosomal inversion polymorphism nonrandomly associated with ecotypes that mate assortatively. Here, we consider the potential role of paracentric inversions in promoting speciation in A. gambiae via “ecotypification,” a term that refers to differentiation arising from local adaptation. In particular, we focus on the Bamako form, an ecotype characterized by low inversion polymorphism and fixation of an inversion, 2Rj, that is very rare or absent in all other forms of A. gambiae. The Bamako form has a restricted distribution by the upper Niger River and its tributaries that is associated with a distinctive type of larval habitat, laterite rock pools, hypothesized to be its optimal breeding site. We first present computer simulations to investigate whether the population dynamics of A. gambiae are consistent with chromosomal speciation by ecotypification. The models are parameterized using field observations on the various forms of A. gambiae that exist in Mali, West Africa. We then report on the distribution of larvae of this species collected from rock pools and more characteristic breeding sites nearby. Both the simulations and field observations support the thesis that speciation by ecotypification is occurring, or has occurred, prompting consideration of Bamako as an independent species. PMID:18287019

  12. Computation of inverse magnetic cascades

    NASA Technical Reports Server (NTRS)

    Montgomery, D.

    1981-01-01

    Inverse cascades of magnetic quantities for turbulent incompressible magnetohydrodynamics are reviewed, for two and three dimensions. The theory is extended to the Strauss equations, a description intermediate between two and three dimensions appropriate to Tokamak magnetofluids. Consideration of the absolute equilibrium Gibbs ensemble for the system leads to a prediction of an inverse cascade of magnetic helicity, which may manifest itself as a major disruption. An agenda for computational investigation of this conjecture is proposed.

  13. Viscoelastic material inversion using Sierra-SD and ROL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Timothy; Aquino, Wilkins; Ridzal, Denis

    2014-11-01

    In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian vector products through matrix free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.
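    The matrix-free Hessian-vector product idea can be sketched with a directional finite difference of the gradient; the quadratic objective below is illustrative, not the viscoelastic PDE-constrained problem (where the products come from adjoint solves).

```python
import numpy as np

def hess_vec(grad, x, v, eps=1e-6):
    """Matrix-free Hessian-vector product via a directional finite
    difference of the gradient: H v \u2248 (\u2207f(x + eps*v) - \u2207f(x)) / eps.
    No Hessian matrix is ever formed or stored."""
    return (grad(x + eps * v) - grad(x)) / eps

# quadratic test objective f(x) = 0.5 x^T A x, so grad = A x and H v = A v
A = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda x: A @ x
v = np.array([1.0, -2.0])
Hv = hess_vec(grad, np.zeros(2), v)
```

Optimization packages such as ROL consume exactly this kind of operator: they only ever ask for gradients and Hessian-vector products, never the matrices themselves.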

  14. General Theory and Algorithms for the Non-Causal Inversion, Slewing and Control of Space-Based Articulated Structures

    DTIC Science & Technology

    1993-10-01

    [Only fragments of this record's table of contents and abstract survive.] Topics include simultaneous trajectory tracking and vibration reduction of flexible structures, and buckling control of a flexible beam using piezoelectric actuators. The surviving abstract text notes that a bounded solution for the inverse dynamic torque has to be non-causal; that Bayo et al. [3] extended the inverse dynamics to planar, multiple-link systems; and that an approach presented by Bayo and Moulin [4] for the single-link system includes provisions for extension to multiple-link systems, together with an equivalent time-domain approach.

  15. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code, special emphasis has been put on representing the operations by block matrices for the conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily for its simplicity of implementation and the accessibility of shared-memory multi-core computing machines. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
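    The Gauss-Newton outer loop with a conjugate-gradient inner solve of the normal equations can be sketched matrix-free, using only Jacobian-vector and transpose products; the zero-residual test problem below (Rosenbrock residuals) is illustrative, not the CSEM system.

```python
import numpy as np

def gauss_newton_cg(residual, jvp, jtvp, x, iters=20, cg_iters=50):
    """Gauss-Newton with a matrix-free conjugate-gradient inner solve
    of (J^T J) p = -J^T r, using only J@v and J.T@w products."""
    for _ in range(iters):
        r = residual(x)
        b = -jtvp(x, r)
        if np.linalg.norm(b) < 1e-10:      # converged: zero gradient
            break
        p = np.zeros_like(x)
        res, d = b.copy(), b.copy()
        for _ in range(cg_iters):          # CG on the normal equations
            Ad = jtvp(x, jvp(x, d))        # (J^T J) d without forming J
            alpha = (res @ res) / (d @ Ad)
            p = p + alpha * d
            new = res - alpha * Ad
            if np.linalg.norm(new) < 1e-12:
                break
            beta = (new @ new) / (res @ res)
            res, d = new, new + beta * d
        x = x + p
    return x

# illustrative zero-residual test problem (Rosenbrock residuals)
def residual(x):
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def jvp(x, v):   # J @ v
    return np.array([10.0 * (v[1] - 2.0 * x[0] * v[0]), -v[0]])

def jtvp(x, w):  # J.T @ w
    return np.array([-20.0 * x[0] * w[0] - w[1], 10.0 * w[0]])

sol = gauss_newton_cg(residual, jvp, jtvp, np.zeros(2))
```

The adjoint-field economies described in the abstract enter through `jtvp`: each transpose product is an adjoint solve, so reducing the number (or grid resolution) of adjoint calls directly reduces the cost of every CG iteration.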

  16. Computing Fourier integral operators with caustics

    NASA Astrophysics Data System (ADS)

    Caday, Peter

    2016-12-01

    Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards, and forward and inverse discrete Fourier transforms, which can be computed in O(N^(n+(n-1)/2) log N) time for an n-dimensional FIO. The same preprocessed data also allows computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm's output, and easily extendible MATLAB/C++ source code is available from the author.
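    In the degenerate case where the canonical transformation is the identity, an FIO reduces to a Fourier multiplier, and the multiply-and-transform structure can be sketched directly with discrete Fourier transforms; the derivative example is illustrative.

```python
import numpy as np

def fourier_multiplier(f, symbol, L=2 * np.pi):
    """Apply a Fourier multiplier operator (an FIO whose canonical
    transformation is the identity): forward FFT, multiply each mode
    by symbol(k), inverse FFT."""
    n = len(f)
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # integer wavenumbers for L = 2*pi
    return np.fft.ifft(symbol(k) * np.fft.fft(f)).real

# d/dx has symbol i*k: differentiate sin(x) on a periodic grid
x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
df = fourier_multiplier(np.sin(x), lambda k: 1j * k)
```

For a general canonical graph the algorithm adds the pushforward step between the transforms, which is where the O(N^(n+(n-1)/2) log N) cost arises.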

  17. Knee joint forces: prediction, measurement, and significance

    PubMed Central

    D’Lima, Darryl D.; Fregly, Benjamin J.; Patil, Shantanu; Steklov, Nikolai; Colwell, Clifford W.

    2011-01-01

    Knee forces are highly significant in osteoarthritis and in the survival and function of knee arthroplasty. A large number of studies have attempted to estimate forces around the knee during various activities. Several approaches have been used to relate knee kinematics and external forces to internal joint contact forces, the most popular being inverse dynamics, forward dynamics, and static body analyses. Knee forces have also been measured in vivo after knee arthroplasty, which serves as valuable validation of computational predictions. This review summarizes the results of published studies that measured knee forces for various activities. The efficacy of various methods to alter knee force distribution, such as gait modification, orthotics, walking aids, and custom treadmills are analyzed. Current gaps in our knowledge are identified and directions for future research in this area are outlined. PMID:22468461

  18. Finite element analysis in fluids; Proceedings of the Seventh International Conference on Finite Element Methods in Flow Problems, University of Alabama, Huntsville, Apr. 3-7, 1989

    NASA Technical Reports Server (NTRS)

    Chung, T. J. (Editor); Karr, Gerald R. (Editor)

    1989-01-01

    Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.

  19. Getting in shape: Reconstructing three-dimensional long-track speed skating kinematics by comparing several body pose reconstruction techniques.

    PubMed

    van der Kruk, E; Schwab, A L; van der Helm, F C T; Veeger, H E J

    2018-03-01

    In gait studies body pose reconstruction (BPR) techniques have been widely explored, but no previous protocols have been developed for speed skating, and the peculiarities of the skating posture and technique do not automatically allow the transfer of those results to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics of speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight-body-segment model together with a global optimization method, with a revolute joint in the knee and in the lumbosacral joint while keeping the other joints spherical, would be the most realistic model to use for the inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-squares error method for the inverse dynamics. Reporting on the BPR technique and the inverse dynamic method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR, and an underestimation of up to 31% in mean joint power when a bottom-up inverse dynamics method was chosen instead of a least-squares error approach. Although these results are aimed at speed skating, reporting on the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Impact of meteorological inflow uncertainty on tracer transport and source estimation in urban atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip

    2015-08-08

    Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
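
    The Bayesian comparison of wind observations with Monte Carlo simulations can be sketched, in heavily simplified form, as likelihood weighting of ensemble members. Function and variable names are illustrative; the actual Aeolus workflow is far more involved.

```python
import numpy as np

def inflow_weights(simulated_winds, observed_winds, sigma):
    """Posterior weights for Monte Carlo inflow samples under a flat prior
    and independent Gaussian observation errors (a bare-bones stand-in for
    the Bayesian inverse technique described above).

    simulated_winds : (n_samples, n_obs) winds predicted by each member
    observed_winds  : (n_obs,) measured winds
    sigma           : observation-error standard deviation
    """
    sim = np.atleast_2d(np.asarray(simulated_winds, float))
    misfit = np.sum((sim - np.asarray(observed_winds, float)) ** 2, axis=1)
    log_like = -0.5 * misfit / sigma ** 2
    w = np.exp(log_like - log_like.max())   # shift for numerical stability
    return w / w.sum()
```

    Members whose simulated winds match the observations receive most of the posterior mass; the weighted ensemble then drives the dispersion calculations.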

  1. Analysis of Tube Hydroforming by means of an Inverse Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.

    2003-05-01

    This paper presents a computational tool for the analysis of freely hydroformed tubes by means of an inverse approach. The formulation of the inverse method developed by Guo et al. is adopted and extended to tube hydroforming problems in which the initial geometry is a round tube subjected to hydraulic pressure and axial feed at the tube ends (end-feed). A simple criterion based on a forming limit diagram is used to predict the necking regions in the deformed workpiece. Although the developed computational tool is a stand-alone code, it has been linked to the Marc finite element code for meshing and visualization of results. The application of the inverse approach to tube hydroforming is illustrated through the analyses of aluminum alloy AA6061-T4 seamless tubes under free hydroforming conditions. The results obtained are in good agreement with those issued from a direct incremental approach. However, the computational time of the inverse procedure is much less than that of the incremental method.

  2. Improved hydrogeophysical characterization and monitoring through parallel modeling and inversion of time-domain resistivity and induced-polarization data

    USGS Publications Warehouse

    Johnson, Timothy C.; Versteeg, Roelof J.; Ward, Andy; Day-Lewis, Frederick D.; Revil, André

    2010-01-01

    Electrical geophysical methods have found wide use in the growing discipline of hydrogeophysics for characterizing the electrical properties of the subsurface and for monitoring subsurface processes in terms of the spatiotemporal changes in subsurface conductivity, chargeability, and source currents they govern. Presently, multichannel and multielectrode data collection systems can collect large data sets in relatively short periods of time. Practitioners, however, often are unable to fully utilize these large data sets and the information they contain because of standard desktop-computer processing limitations. These limitations can be addressed by utilizing the storage and processing capabilities of parallel computing environments. We have developed a parallel distributed-memory forward and inverse modeling algorithm for analyzing resistivity and time-domain induced polarization (IP) data. The primary components of the parallel computations include distributed computation of the pole solutions in forward mode, distributed storage and computation of the Jacobian matrix in inverse mode, and parallel execution of the inverse equation solver. We have tested the corresponding parallel code in three efforts: (1) resistivity characterization of the Hanford 300 Area Integrated Field Research Challenge site in Hanford, Washington, U.S.A., (2) resistivity characterization of a volcanic island in the southern Tyrrhenian Sea in Italy, and (3) resistivity and IP monitoring of biostimulation at a Superfund site in Brandywine, Maryland, U.S.A. Inverse analysis of each of these data sets would be limited or impossible in a standard serial computing environment, which underscores the need for parallel high-performance computing to fully utilize the potential of electrical geophysical methods in hydrogeophysical applications.

  3. Finite-fault source inversion using adjoint methods in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-04-01

    Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively via a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. 
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.
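
    The core of the adjoint idea described in this record, namely that the gradient of a least-squares cost is obtained from one forward and one adjoint application rather than from precomputed Green's functions, can be sketched with a toy linear forward operator. Nothing here reproduces the paper's wave-propagation setup; the matrix is just a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((60, 20))   # toy linear forward operator: data = G @ model
m_true = rng.standard_normal(20)    # "true" source model
d_obs = G @ m_true                  # synthetic observed seismograms

def cost_and_gradient(m):
    """Least-squares misfit and its gradient. For a linear operator the
    adjoint is G.T, so the gradient G.T @ (G @ m - d_obs) costs one forward
    and one adjoint application per iteration -- no precomputed Green's
    functions are needed."""
    r = G @ m - d_obs
    return 0.5 * r @ r, G.T @ r

# Fixed-step steepest descent, a simple stand-in for the iterative
# gradient-based minimization named in the abstract.
m = np.zeros(20)
for _ in range(500):
    J, grad = cost_and_gradient(m)
    m -= 0.01 * grad
```

    For a wave-equation forward model the same structure holds, except that the adjoint application is itself a (reversed-time) wave simulation.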

  4. Finite-fault source inversion using adjoint methods in 3-D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-07-01

    Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively via a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. 
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.

  5. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms, for obtaining long-wavelength velocity models, could avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we proposed spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, decomposed frequency components from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal the different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion could also recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram was utilized. In addition, detailed information on recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
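
    The decomposition of a trace's spectrogram into single-frequency components can be sketched as follows. The synthetic trace, sampling rate, and window length are illustrative, and `scipy.signal.spectrogram` stands in for whatever time-frequency transform the study actually used.

```python
import numpy as np
from scipy import signal

fs = 500.0                                  # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Time-frequency decomposition of the trace.
f, tau, Sxx = signal.spectrogram(trace, fs=fs, nperseg=250)

# Each row of Sxx is the energy envelope of one decomposed frequency
# component; single- and multi-frequency analyses select one or several rows.
env10 = Sxx[np.argmin(np.abs(f - 10.0))]
env40 = Sxx[np.argmin(np.abs(f - 40.0))]
```

    In the spectrogram-inversion idea, such per-frequency envelopes of observed and calculated traces are what gets compared, which is how low-frequency information can be reproduced from band-limited data.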

  6. On the inversion-indel distance

    PubMed Central

    2013-01-01

    Background The inversion distance, that is the distance between two unichromosomal genomes with the same content allowing only inversions of DNA segments, can be computed thanks to a pioneering approach of Hannenhalli and Pevzner in 1995. In 2000, El-Mabrouk extended the inversion model to allow the comparison of unichromosomal genomes with unequal contents, thus insertions and deletions of DNA segments besides inversions. However, an exact algorithm was presented only for the case in which we have insertions alone and no deletion (or vice versa), while a heuristic was provided for the symmetric case, that allows both insertions and deletions and is called the inversion-indel distance. In 2005, Yancopoulos, Attie and Friedberg started a new branch of research by introducing the generic double cut and join (DCJ) operation, that can represent several genome rearrangements (including inversions). Among others, the DCJ model gave rise to two important results. First, it has been shown that the inversion distance can be computed in a simpler way with the help of the DCJ operation. Second, the DCJ operation originated the DCJ-indel distance, that allows the comparison of genomes with unequal contents, considering DCJ, insertions and deletions, and can be computed in linear time. Results In the present work we put these two results together to solve an open problem, showing that, when the graph that represents the relation between the two compared genomes has no bad components, the inversion-indel distance is equal to the DCJ-indel distance. We also give a lower and an upper bound for the inversion-indel distance in the presence of bad components. PMID:24564182
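
    A small, self-contained ingredient of inversion-distance computations is the breakpoint count of a signed permutation. The sketch below shows only this bound-related quantity, not the full Hannenhalli-Pevzner or DCJ-indel machinery discussed in the record.

```python
def breakpoints(perm):
    """Breakpoint count of a signed permutation: positions where an
    adjacency (i, i+1) of the identity is missing. This is only a
    classical lower-bound ingredient; the actual inversion distance
    requires the full breakpoint/DCJ graph analysis of Hannenhalli and
    Pevzner, which this sketch deliberately omits."""
    ext = [0] + list(perm) + [len(perm) + 1]   # frame with 0 and n+1
    return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)
```

    For example, `breakpoints([-2, -1, 3])` is 2, and a single inversion of the first two elements yields the identity; each inversion can remove at most two breakpoints, which is where the classical lower bound comes from.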

  7. Inverse transformation: unleashing spatially heterogeneous dynamics with an alternative approach to XPCS data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
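
    The CONTIN-style idea, recovering a distribution of relaxation times by a non-negative inverse Laplace transform, can be sketched as a linear inversion with a non-negativity constraint. The synthetic bimodal data, grid, and values below are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic bimodal decay: two well-separated relaxation times, loosely
# mimicking heterogeneous colloidal-gel dynamics.
t = np.logspace(-3, 2, 200)                 # lag times
y = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 5.0)

# Inverse Laplace transform as a constrained linear inversion:
# y(t) = sum_j G(tau_j) * exp(-t / tau_j) with G >= 0 -- the same
# non-negativity idea CONTIN uses for DLS data.
tau_grid = np.logspace(-3, 2, 80)
K = np.exp(-t[:, None] / tau_grid[None, :])
G, rnorm = nnls(K, y)
```

    On noiseless data the recovered distribution G concentrates near the two true relaxation times; real XPCS data additionally require regularization to stabilize the inversion.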

  8. Inverse transformation: unleashing spatially heterogeneous dynamics with an alternative approach to XPCS data analysis

    DOE PAGES

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; ...

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.

  9. Imaging the Earth's anisotropic structure with Bayesian Inversion of fundamental and higher mode surface-wave dispersion data

    NASA Astrophysics Data System (ADS)

    Ravenna, Matteo; Lebedev, Sergei; Celli, Nicolas

    2017-04-01

    We develop a Markov Chain Monte Carlo inversion of fundamental and higher mode phase-velocity curves for radially and azimuthally anisotropic structure of the crust and upper mantle. In the inversions of Rayleigh- and Love-wave dispersion curves for radially anisotropic structure, we obtain probabilistic 1D radially anisotropic shear-velocity profiles of the isotropic average Vs and anisotropy (or Vsv and Vsh) as functions of depth. In the inversions for azimuthal anisotropy, Rayleigh-wave dispersion curves at different azimuths are inverted for the vertically polarized shear-velocity structure (Vsv) and the 2-phi component of azimuthal anisotropy. The strength and originality of the method lie in its fully non-linear approach. Each model realization is computed using exact forward calculations. The uncertainty of the models is a part of the output. In the inversions for azimuthal anisotropy, in particular, the computation of the forward problem is performed separately at different azimuths, with no linear approximations on the relation of the Earth's elastic parameters to surface wave phase velocities. The computations are performed in parallel in order to reduce the computing time. We compare inversions of the fundamental mode phase-velocity curves alone with inversions that also include overtones. The addition of higher modes enhances the resolving power of the anisotropic structure of the deep upper mantle. We apply the inversion method to phase-velocity curves in a few regions, including the Hangai dome region in Mongolia. Our models provide constraints on the Moho depth, the Lithosphere-Asthenosphere Boundary, and the alignment of the anisotropic fabric and the direction of current and past flow, from the crust down to the deep asthenosphere.
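
    A minimal Metropolis-Hastings sketch of the fully non-linear approach, with exact forward calculations inside the sampler and uncertainty as part of the output, might look as follows. The one-parameter "dispersion" forward model is purely illustrative, not a real surface-wave computation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward problem standing in for an exact dispersion-curve computation:
# predicted phase velocities as a nonlinear function of one shear-velocity
# parameter vs (the functional form and all numbers are invented).
periods = np.linspace(10, 100, 20)
forward = lambda vs: vs * (1 - 0.1 * np.exp(-periods / 50.0))
vs_true, sigma = 4.0, 0.01
d_obs = forward(vs_true) + sigma * rng.standard_normal(periods.size)

def log_posterior(vs):
    if not 2.0 < vs < 6.0:          # flat prior on a physical range
        return -np.inf
    r = forward(vs) - d_obs         # exact forward calculation, no linearization
    return -0.5 * np.sum(r**2) / sigma**2

# Metropolis-Hastings random walk: the chain of accepted models is the
# probabilistic output, so uncertainty estimates come for free.
samples, vs, lp = [], 3.0, log_posterior(3.0)
for _ in range(5000):
    prop = vs + 0.02 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        vs, lp = prop, lp_prop
    samples.append(vs)
post = np.array(samples[1000:])     # discard burn-in
```

    The mean and spread of `post` play the role of the probabilistic 1D profiles described in the abstract, only in one dimension instead of as a function of depth.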

  10. Control of a flexible bracing manipulator: Integration of current research work to realize the bracing manipulator

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo

    1991-01-01

    All research results about flexible manipulator control were integrated to show a control scenario of a bracing manipulator. First, dynamic analysis of a flexible manipulator was done for modeling. Second, from the dynamic model, the inverse dynamic equation was derived, and the time domain inverse dynamic method was proposed for the calculation of the feedforward torque and the desired flexible coordinate trajectories. Third, a tracking controller was designed by combining the inverse dynamic feedforward control with the joint feedback control. The control scheme was applied to the tip position control of a single link flexible manipulator for zero and non-zero initial condition cases. Finally, the contact control scheme was added to the position tracking control. A control scenario of a bracing manipulator is provided and evaluated through simulation and experiment on a single link flexible manipulator.
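
    The combination of inverse dynamic feedforward torque with joint feedback can be sketched for a rigid single link. The paper's manipulator is flexible, so this is only the skeleton of the scheme; the model constants and gains are hypothetical.

```python
import numpy as np

# Rigid single-link joint model I*ddq + b*dq = tau.
I, b = 0.5, 0.1                      # inertia and damping, hypothetical
dt, n = 1e-3, 2000
t = np.arange(n) * dt

q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
Kp, Kd = 200.0, 30.0                 # joint feedback gains, hypothetical

q, dq = 0.0, 0.0
err = np.empty(n)
for k in range(n):
    # Feedforward torque from the inverse dynamic equation, plus joint
    # feedback on the tracking error.
    tau = (I * qdd_des[k] + b * qd_des[k]
           + Kp * (q_des[k] - q) + Kd * (qd_des[k] - dq))
    ddq = (tau - b * dq) / I         # plant forward dynamics
    dq += ddq * dt                   # semi-implicit Euler integration
    q += dq * dt
    err[k] = q_des[k] - q
```

    With an exact model the feedforward term does the tracking and the feedback only absorbs the initial condition mismatch and disturbances; for the flexible link, the feedforward must additionally supply the desired flexible-coordinate trajectories mentioned in the abstract.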

  11. A Higher Order Iterative Method for Computing the Drazin Inverse

    PubMed Central

    Soleymani, F.; Stanimirović, Predrag S.

    2013-01-01

    A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper. PMID:24222747
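
    The paper's higher-order scheme builds on iterative approximate inversion of the kind shown below. This sketch uses the classical second-order Newton-Schulz iteration, not the authors' method, with a standard initial guess that guarantees convergence.

```python
import numpy as np

def iterative_inverse(A, iters=60):
    """Classical second-order Newton-Schulz iteration
        X_{k+1} = X_k (2I - A X_k),
    shown here only as the basic fixed-point idea that higher-order
    schemes accelerate. The starting guess A.T / (||A||_1 * ||A||_inf)
    guarantees convergence for nonsingular real A."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X
```

    Each sweep roughly squares the error (quadratic convergence); higher-order variants of the same fixed-point idea raise it to a higher power per sweep at the cost of more matrix products.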

  12. Time-domain wavefield reconstruction inversion

    NASA Astrophysics Data System (ADS)

    Li, Zhen-Chun; Lin, Yu-Zhao; Zhang, Kai; Li, Yuan-Yuan; Yu, Zhen-Nan

    2017-12-01

    Wavefield reconstruction inversion (WRI) is an improved full waveform inversion method proposed in recent years. The WRI method expands the search space by introducing the wave equation into the objective function and reconstructing the wavefield to update model parameters, thereby improving computational efficiency and mitigating the influence of local minima. However, frequency-domain WRI is difficult to apply to real seismic data because of its high computational memory demand and its requirement of a time-frequency transformation with additional computational cost. In this paper, wavefield reconstruction inversion theory is extended into the time domain, the augmented wave equation of WRI is derived in the time domain, and the model gradient is modified according to numerical tests with anomalies. Examples with synthetic data illustrate the accuracy of time-domain WRI and its low dependency on low-frequency information.

  13. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    The spatial resolution computation of a solution is nontrivial and more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operation. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the particular inversion technique used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid for, at minimum, general linear inverse problems.

  14. An inverse dynamics approach to trajectory optimization for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.

  15. Inverse dynamic substructuring using the direct hybrid assembly in the frequency domain

    NASA Astrophysics Data System (ADS)

    D'Ambrogio, Walter; Fregolent, Annalisa

    2014-04-01

    The paper deals with the identification of the dynamic behaviour of a structural subsystem, starting from the known dynamic behaviour of both the coupled system and the remaining part of the structural system (residual subsystem). This topic is also known as decoupling problem, subsystem subtraction or inverse dynamic substructuring. Whenever it is necessary to combine numerical models (e.g. FEM) and test models (e.g. FRFs), one speaks of experimental dynamic substructuring. Substructure decoupling techniques can be classified as inverse coupling or direct decoupling techniques. In inverse coupling, the equations describing the coupling problem are rearranged to isolate the unknown substructure instead of the coupled structure. On the contrary, direct decoupling consists in adding to the coupled system a fictitious subsystem that is the negative of the residual subsystem. Starting from a reduced version of the 3-field formulation (dynamic equilibrium using FRFs, compatibility and equilibrium of interface forces), a direct hybrid assembly is developed by requiring that both compatibility and equilibrium conditions are satisfied exactly, either at coupling DoFs only, or at additional internal DoFs of the residual subsystem. Equilibrium and compatibility DoFs might not be the same: this generates the so-called non-collocated approach. The technique is applied using experimental data from an assembled system made of a plate and a rigid mass.
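
    For a single coupling DoF, inverse dynamic substructuring reduces to subtracting dynamic stiffnesses (inverse FRFs) at the interface. The sketch below uses that simplest case with illustrative SDOF substructures; the paper's hybrid assembly with multiple and non-collocated DoFs is considerably more general.

```python
import numpy as np

w = np.linspace(1.0, 200.0, 2000)           # frequency axis, rad/s

def sdof_receptance(m, c, k):
    """Drive-point receptance of a single-DoF substructure (illustrative)."""
    return 1.0 / (-m * w**2 + 1j * c * w + k)

H_res = sdof_receptance(1.0, 2.0, 1.0e4)    # known residual subsystem
H_unk = sdof_receptance(2.0, 4.0, 5.0e4)    # unknown subsystem (ground truth)

# Coupling at a single DoF: dynamic stiffnesses (inverse FRFs) add.
H_coupled = 1.0 / (1.0 / H_res + 1.0 / H_unk)

# Decoupling ("subsystem subtraction"): remove the residual subsystem's
# dynamic stiffness from the coupled FRF to identify the unknown part.
H_identified = 1.0 / (1.0 / H_coupled - 1.0 / H_res)
```

    With noiseless, consistent FRFs the subtraction is exact; with measured FRFs it is ill-conditioned near resonances, which is one motivation for the extra internal DoFs and the hybrid assembly studied in the paper.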

  16. Building-Resolved CFD Simulations for Greenhouse Gas Transport and Dispersion over Washington DC / Baltimore

    NASA Astrophysics Data System (ADS)

    Prasad, K.; Lopez-Coto, I.; Ghosh, S.; Mueller, K.; Whetstone, J. R.

    2015-12-01

    The North-East Corridor project aims to use a top-down inversion methodology to quantify sources of Greenhouse Gas (GHG) emissions over urban domains such as Washington DC / Baltimore with high spatial and temporal resolution. Atmospheric transport of tracer gases from an emission source to a tower mounted receptor are usually conducted using the Weather Research and Forecasting (WRF) model. For such simulations, WRF employs a parameterized turbulence model and does not resolve the fine scale dynamics generated by the flow around buildings and communities comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with WRF. FDS has the potential to evaluate the impact of complex urban topography on near-field dispersion and mixing difficult to simulate with a mesoscale atmospheric model. Such capabilities may be important in determining urban GHG emissions using atmospheric measurements. A methodology has been developed to run FDS as a sub-grid scale model within a WRF simulation. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one way coupling performed in an off-line mode. Using the coupled WRF / FDS model, NIST will investigate the effects of the urban canopy at horizontal resolutions of 10-20 m in a domain of 12 x 12 km. The coupled WRF-FDS simulations will be used to calculate the dispersion of tracer gases in the North-East Corridor and to evaluate the upwind areas that contribute to tower observations, referred to in the inversion community as influence functions. Results of this study will provide guidance regarding the importance of explicit simulations of urban atmospheric turbulence in obtaining accurate estimates of greenhouse gas emissions and transport.
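
    The one-way nudging used to couple the fine-scale model to WRF can be sketched as a simple relaxation of the fine-scale field toward the mesoscale field. The relaxation constant and all names are illustrative, and the spatial/temporal interpolation of the mesoscale field is omitted.

```python
import numpy as np

def nudge(u_fine, u_meso, k, dt, steps):
    """One-way nudging of a fine-scale (FDS-like) velocity field toward a
    mesoscale (WRF-like) field: du/dt = -k * (u - u_meso)."""
    u = np.asarray(u_fine, float).copy()
    for _ in range(steps):
        u += -k * (u - u_meso) * dt
    return u
```

    The fine-scale field relaxes to the mesoscale value on a time scale of 1/k while the fine-scale solver remains free to develop building-resolved turbulence on top of it.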

  17. Evaluation of a musculoskeletal model with prosthetic knee through six experimental gait trials.

    PubMed

    Kia, Mohammad; Stylianou, Antonis P; Guess, Trent M

    2014-03-01

    Knowledge of the forces acting on musculoskeletal joint tissues during movement benefits tissue engineering, artificial joint replacement, and our understanding of ligament and cartilage injury. Computational models can be used to predict these internal forces, but musculoskeletal models that simultaneously calculate muscle force and the resulting loading on joint structures are rare. This study used publicly available gait, skeletal geometry, and instrumented prosthetic knee loading data [1] to evaluate muscle driven forward dynamics simulations of walking. Inputs to the simulation were measured kinematics and outputs included muscle, ground reaction, ligament, and joint contact forces. A full body musculoskeletal model with subject specific lower extremity geometries was developed in the multibody framework. A compliant contact was defined between the prosthetic femoral component and tibia insert geometries. Ligament structures were modeled with a nonlinear force-strain relationship. The model included 45 muscles on the right lower leg. During forward dynamics simulations a feedback control scheme calculated muscle forces using the error signal between the current muscle lengths and the lengths recorded during inverse kinematics simulations. Predicted tibio-femoral contact force, ground reaction forces, and muscle forces were compared to experimental measurements for six different gait trials using three different gait types (normal, trunk sway, and medial thrust). The mean average deviation (MAD) and root mean square deviation (RMSD) over one gait cycle are reported. The muscle driven forward dynamics simulations were computationally efficient and consistently reproduced the inverse kinematics motion. The forward simulations also predicted total knee contact forces (166N
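
    The MAD and RMSD metrics reported above are straightforward to compute; a sketch (treating MAD as the mean of absolute deviations):

```python
import numpy as np

def mad_rmsd(predicted, measured):
    """MAD (computed here as the mean of absolute deviations) and RMSD
    between predicted and measured signals over one gait cycle."""
    d = np.asarray(predicted, float) - np.asarray(measured, float)
    return np.abs(d).mean(), np.sqrt((d**2).mean())
```

    RMSD weights large excursions more heavily than MAD, which is why both are commonly reported together for waveform comparisons such as knee contact forces.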

  18. Performance recovery of a class of uncertain non-affine systems with unmodelled dynamics: an indirect dynamic inversion method

    NASA Astrophysics Data System (ADS)

    Yi, Bowen; Lin, Shuyi; Yang, Bo; Zhang, Weidong

    2018-02-01

    This paper presents an output feedback indirect dynamic inversion (IDI) approach for a class of uncertain non-affine systems with input unmodelled dynamics. Compared with previous approaches to performance recovery, the proposed method addresses a broader class of non-affine-in-control systems with triangular structure. An IDI state feedback law is designed first, in which less knowledge of the plant model is required than in earlier approximate dynamic inversion methods, thus yielding more robust performance. An extended high-gain observer is then designed to accomplish the task with output feedback. Finally, we prove that the designed IDI controller is equivalent to an adaptive proportional-integral (PI) controller, with respect to both time-response equivalence and robustness equivalence. This implies that for the studied strict-feedback non-affine systems with unmodelled dynamics, there always exists a PI controller that stabilises the system. The effectiveness and benefits of the designed approach are verified by three examples.
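The claimed PI equivalence can be illustrated with a minimal simulation. The scalar non-affine plant below is a made-up example (not one of the paper's three examples), and the gains are hand-tuned rather than derived from the IDI construction:

```python
import math

def simulate_pi(kp, ki, x0=1.0, ref=0.0, dt=0.001, steps=20000):
    """Drive a toy non-affine-in-control plant, dx/dt = -x + u + 0.2*sin(u),
    toward a constant reference with a plain PI law."""
    x, integ = x0, 0.0
    for _ in range(steps):
        e = ref - x
        integ += e * dt            # integral of the tracking error
        u = kp * e + ki * integ    # PI control, no model inversion needed
        x += (-x + u + 0.2 * math.sin(u)) * dt
    return x
```

With kp = 5 and ki = 2 the state settles on the reference despite the sin(u) non-affinity, which is the flavor of result the equivalence theorem predicts for this system class.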

  19. Laser Induced Aluminum Surface Breakdown Model

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Zhang, Sijun; Wang, Ten-See (Technical Monitor)

    2002-01-01

    Laser powered propulsion systems involve complex fluid dynamics, thermodynamics and radiative transfer processes. Based on an unstructured-grid, pressure-based computational aerothermodynamics platform, several sub-models describing such underlying physics as laser ray tracing and focusing, thermal non-equilibrium, plasma radiation and air spark ignition have been developed. The proposed work shall extend the numerical platform and existing sub-models to include the aluminum wall surface Inverse Bremsstrahlung (IB) effect, from which surface ablation and free-electron generation can be initiated without relying on the air spark ignition sub-model. The following tasks will be performed to accomplish the research objectives.

  20. Users manual for the Variable dimension Automatic Synthesis Program (VASP)

    NASA Technical Reports Server (NTRS)

    White, J. S.; Lee, H. Q.

    1971-01-01

    A dictionary and some example problems for the Variable Dimension Automatic Synthesis Program (VASP) are presented. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory. These example problems include dynamic response, optimal control gain, solution of the sampled-data matrix Riccati equation, matrix decomposition, and the pseudo-inverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in conversational mode on the Ames 360/67 computer.
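VASP's own pseudo-inverse routine is not reproduced here, but the underlying computation for a full-column-rank matrix, A+ = (A^T A)^(-1) A^T, can be sketched for the two-column case, where the 2x2 inverse can be written out explicitly. This is an illustration of the definition, not VASP's algorithm:

```python
def pinv_two_col(A):
    """Moore-Penrose pseudo-inverse A+ = (A^T A)^{-1} A^T for an m x 2
    matrix A with linearly independent columns (2x2 inverse in closed form)."""
    g11 = sum(r[0] * r[0] for r in A)   # entries of the Gram matrix A^T A
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A)
    det = g11 * g22 - g12 * g12
    inv = [[g22 / det, -g12 / det], [-g12 / det, g11 / det]]
    # A+ = inv(A^T A) * A^T, a 2 x m matrix
    return [[inv[i][0] * r[0] + inv[i][1] * r[1] for r in A] for i in range(2)]

def lstsq_solve(A, b):
    """Least-squares solution x = A+ b of the overdetermined system A x = b."""
    Ap = pinv_two_col(A)
    return [sum(Ap[i][j] * b[j] for j in range(len(b))) for i in range(2)]
```

For a consistent system the pseudo-inverse reproduces the exact solution; for an inconsistent one it returns the least-squares minimizer.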

  1. The SCEC/USGS dynamic earthquake rupture code verification exercise

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.

    2009-01-01

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). 
In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called “spontaneous rupture”) solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.

  2. Self-organizing radial basis function networks for adaptive flight control and aircraft engine state estimation

    NASA Astrophysics Data System (ADS)

    Shankar, Praveen

    The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators that utilize a parameterization structure adapted online reduces the effect of this error between the design model and the actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parameterization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error, which may occur due to imperfect modeling, approximate inversion, or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations, including control surface failures, modeling errors, and external disturbances, with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. The adaptive approximation-based controller achieved excellent tracking error minimization to the pre-specified level, while the baseline dynamic inversion controller failed to meet this performance specification. 
The performance of the SORBFN-based controller is also compared to that of a fixed RBF network-based adaptive controller. While the fixed RBF network controller, tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN achieves good tracking convergence under all error conditions.
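A self-organizing RBF network of the kind described can be sketched in miniature: a Gaussian RBF evaluator plus a growth rule that adds a center only when the tracking error is large and no existing center is nearby. The thresholds and the weight-initialization choice below are illustrative placeholders, not the dissertation's Lyapunov-derived update laws:

```python
import math

def rbf_net(x, centers, widths, weights):
    """Output of a 1-D Gaussian radial basis function network."""
    return sum(w * math.exp(-((x - c) ** 2) / (2.0 * s ** 2))
               for w, c, s in zip(weights, centers, widths))

def grow_if_needed(x, err, centers, widths, weights,
                   e_tol=0.1, d_tol=0.5, width=0.3):
    """Add a new center at x when the error exceeds e_tol and the nearest
    existing center is farther than d_tol; seed its weight with the error."""
    near = min((abs(x - c) for c in centers), default=float("inf"))
    if abs(err) > e_tol and near > d_tol:
        centers.append(x)
        widths.append(width)
        weights.append(err)
        return True
    return False
```

A pruning pass would run the same logic in reverse, dropping centers whose weights stay negligible.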

  3. Efficacy of an ankle brace with a subtalar locking system in inversion control in dynamic movements.

    PubMed

    Zhang, Songning; Wortley, Michael; Chen, Qingjian; Freedman, Julia

    2009-12-01

    Controlled laboratory study. To examine the effectiveness of an ankle brace with a subtalar locking system in restricting ankle inversion during passive and dynamic movements. Semirigid ankle braces are considered more effective in restricting ankle inversion than other types of braces, but a semirigid brace with a subtalar locking system may be even more effective. Nineteen healthy subjects with no history of major lower extremity injuries were included in the study. Participants performed 5 trials of an ankle inversion drop test and a lateral-cutting movement without wearing a brace and while wearing either the Element (with the subtalar locking system), a Functional ankle brace, or an ASO ankle brace. A 2-way repeated-measures analysis of variance (ANOVA) was used to assess brace differences (P < .05). All 3 braces significantly reduced total passive ankle frontal plane range of motion (ROM), with the Element ankle brace being the most effective. For the inversion drop, the results showed significant reductions in peak ankle inversion angle and inversion ROM for all 3 braces compared to the no-brace condition, and the peak inversion velocity was also reduced for the Element brace and the Functional brace. In the lateral-cutting movement, a small but significant reduction of the peak inversion angle in early foot contact and the peak eversion velocity at push-off was seen when wearing the Element and the Functional ankle braces compared to the no-brace condition. Peak vertical ground reaction force was reduced for the Element brace compared to the ASO brace and the no-brace conditions. These results suggest that the tested ankle braces, especially the Element brace, provided effective restriction of ankle inversion during both passive and dynamic movements.

  4. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    NASA Astrophysics Data System (ADS)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. 
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each submitted paper was reviewed by two reviewers, and nine papers were accepted. In addition, three international speakers were invited to present longer talks. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA, GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet

  5. Obtaining valid geologic models from 3-D resistivity inversion of magnetotelluric data at Pahute Mesa, Nevada

    USGS Publications Warehouse

    Rodriguez, Brian D.; Sweetkind, Donald S.

    2015-01-01

    The 3-D inversion was generally able to reproduce the gross resistivity structure of the “known” model, but the simulated conductive volcanic composite unit horizons were often too shallow when compared to the “known” model. Additionally, the chosen computational parameters, such as station spacing, appear to have resulted in computational artifacts that are difficult to interpret but could potentially be removed with further refinements of the 3-D resistivity inversion modeling technique.

  6. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data (MIMD) multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. The thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
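The bisection-plus-inverse-iteration pipeline evaluated in the thesis can be sketched for a single eigenpair: given a shift mu near an eigenvalue (as bisection would supply), repeatedly solve the shifted tridiagonal system with the Thomas algorithm and renormalize. This is a serial illustration of the numerical kernel, not the distributed implementation or the TINVIT replacement:

```python
import math

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (n-1), b = diagonal (n),
    c = super-diagonal (n-1), d = right-hand side (n)."""
    n = len(b)
    cp, dp = [0.0] * (n - 1), [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def inverse_iteration(diag, off, mu, iters=20):
    """Eigenvector of the symmetric tridiagonal matrix for the eigenvalue
    closest to the shift mu, by repeated shifted solves and normalization."""
    v = [1.0] * len(diag)
    for _ in range(iters):
        w = thomas_solve(off, [d - mu for d in diag], off, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

On the 3x3 matrix with diagonal 2 and off-diagonals -1, a shift of 0.5 converges to the eigenpair with eigenvalue 2 - sqrt(2).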

  7. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hyper-cube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  8. Numerical modeling of landslides and generated seismic waves: The Bingham Canyon Mine landslides

    NASA Astrophysics Data System (ADS)

    Miallot, H.; Mangeney, A.; Capdeville, Y.; Hibert, C.

    2016-12-01

    Landslides are important natural hazards and key erosion processes. They create long-period surface waves that can be recorded by regional and global seismic networks. The seismic signals are generated by the acceleration/deceleration of the mass sliding over the topography. They constitute a unique and powerful tool to detect, characterize and quantify landslide dynamics. We investigate here the processes at work during the two massive landslides that struck the Bingham Canyon Mine on 10 April 2013. We carry out a combined analysis of the generated seismic signals and of the landslide processes computed with 3D modeling on a complex topography. Forces computed by broadband seismic waveform inversion are used to constrain the study, in particular the force source and the bulk dynamics. The source time functions are obtained with a 3D model (Shaltop) in which rheological parameters can be adjusted. We first investigate the influence of the initial shape of the sliding mass, which strongly affects the whole landslide dynamics. We also find that the initial shape of the source mass of the first landslide constrains the source mass of the second landslide quite well. We then investigate the effect of a rheological parameter, the friction angle, which strongly influences the resulting computed seismic source function. We test several friction laws, such as the Coulomb friction law and a velocity-weakening friction law. Our results show that the fit of the force waveform to the observed data varies considerably across these different choices.

  9. Full analogue electronic realisation of the Hodgkin-Huxley neuronal dynamics in weak-inversion CMOS.

    PubMed

    Lazaridis, E; Drakakis, E M; Barahona, M

    2007-01-01

    This paper presents a non-linear analog synthesis path towards the modeling and full implementation of the Hodgkin-Huxley neuronal dynamics in silicon. The proposed circuits have been realized in weak-inversion CMOS technology and take advantage of both log-domain and translinear transistor-level techniques.
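The dynamics that the silicon circuit realizes are the standard Hodgkin-Huxley equations, which can be integrated in software for reference. This is a plain forward-Euler simulation with the textbook squid-axon parameters, not a model of the weak-inversion CMOS implementation:

```python
import math

def hh_step(V, m, h, n, I_ext, dt):
    """One forward-Euler step of the Hodgkin-Huxley equations
    (standard squid-axon parameters; V in mV, t in ms, I in uA/cm^2)."""
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    I_Na = 120.0 * m ** 3 * h * (V - 50.0)   # sodium current (E_Na = 50 mV)
    I_K = 36.0 * n ** 4 * (V + 77.0)         # potassium current (E_K = -77 mV)
    I_L = 0.3 * (V + 54.387)                 # leak current
    V += dt * (I_ext - I_Na - I_K - I_L)     # C_m = 1 uF/cm^2
    m += dt * (a_m * (1.0 - m) - b_m * m)
    h += dt * (a_h * (1.0 - h) - b_h * h)
    n += dt * (a_n * (1.0 - n) - b_n * n)
    return V, m, h, n

def simulate(I_ext=10.0, t_end=50.0, dt=0.01):
    """Membrane potential trace under a constant injected current."""
    V, m, h, n = -65.0, 0.0529, 0.5961, 0.3177   # resting steady state
    trace = []
    for _ in range(int(t_end / dt)):
        V, m, h, n = hh_step(V, m, h, n, I_ext, dt)
        trace.append(V)
    return trace
```

At 10 uA/cm^2 the simulated membrane fires action potentials that overshoot 0 mV, the qualitative behavior the analog circuit is designed to reproduce.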

  10. CONTIN XPCS: software for inverse transform analysis of X-ray photon correlation spectroscopy dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan

    X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) reveal materials dynamics using coherent scattering, with XPCS permitting the investigation of dynamics in a more diverse array of materials than DLS. Heterogeneous dynamics occur in many material systems. The authors' recent work has shown how classic tools employed in the DLS analysis of heterogeneous dynamics can be extended to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. The present work describes the software implementation of inverse transform analysis of XPCS data. This software, called CONTIN XPCS, is an extension of traditional CONTIN analysis and accommodates the various dynamics encountered in equilibrium XPCS measurements.

  11. CONTIN XPCS: software for inverse transform analysis of X-ray photon correlation spectroscopy dynamics

    DOE PAGES

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; ...

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) reveal materials dynamics using coherent scattering, with XPCS permitting the investigation of dynamics in a more diverse array of materials than DLS. Heterogeneous dynamics occur in many material systems. The authors' recent work has shown how classic tools employed in the DLS analysis of heterogeneous dynamics can be extended to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. The present work describes the software implementation of inverse transform analysis of XPCS data. This software, called CONTIN XPCS, is an extension of traditional CONTIN analysis and accommodates the various dynamics encountered in equilibrium XPCS measurements.

  12. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
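The essence of the DBIM, solving a sequence of problems linearized about the current background estimate, can be shown on a deliberately tiny toy problem. The scalar "forward model" below, with a weak quadratic term standing in for multiple scattering, is invented for illustration and has nothing of the MLFMA machinery:

```python
def forward(x):
    """Toy nonlinear forward model: a linear (Born) term plus a weak
    quadratic 'multiple scattering' correction."""
    return x + 0.3 * x * x

def dbim_like_invert(d_meas, x0=0.0, iters=25):
    """Successive linearization: at each step, linearize the forward model
    about the current estimate (the 'distorted background') and solve the
    resulting linear problem for an update."""
    x = x0
    for _ in range(iters):
        residual = d_meas - forward(x)
        jac = 1.0 + 0.6 * x        # derivative of forward() at the current estimate
        x += residual / jac        # the linearized (distorted-Born) update
    return x
```

Each iteration is exactly the pattern the abstract describes: a forward solve to get the residual, then a linear inverse step; in the full method the forward solve is the expensive part that the MLFMA accelerates.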

  13. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
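The branch-splitting idea can be sketched on the simplest multi-valued inverse, y = x^2. Here a sign label stands in for the fuzzy clustering and discriminating function, and each branch gets an ordinary linear least-squares fit rather than a Sugeno system; the numbers are illustrative only:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares line ys ~ a + b*xs (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def f(x):
    return x * x   # forward function with a two-valued inverse

# split the training data into branches (here by sign of x; in the paper a
# clustering algorithm plus a discriminating function does this)
upper = [1.0 + i * 0.01 for i in range(101)]   # branch x >= 0
lower = [-x for x in upper]                     # branch x < 0
branches = {}
for label, xs in (("pos", upper), ("neg", lower)):
    ys = [f(x) for x in xs]
    branches[label] = fit_linear(ys, xs)   # regress x on y within the branch

def inverse(y, branch):
    """Approximate inverse of f on the selected branch."""
    a, b = branches[branch]
    return a + b * y
```

Each per-branch model recovers one value of the inverse; querying both branches at the same y returns the two preimages, which is exactly what a single-valued approximator cannot do.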

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S.; Yoginath, Srikanth B.

    Problems such as fault tolerance and scalable synchronization can be efficiently solved using reversibility of applications. Making applications reversible by relying on computation rather than on memory is ideal for large-scale parallel computing, especially for the next generation of supercomputers in which memory is expensive in terms of latency, energy, and price. In this direction, a case study is presented here in reversing a computational core, namely, the Basic Linear Algebra Subprograms (BLAS), which are widely used in scientific applications. A new Reversible BLAS (RBLAS) library interface has been designed, and a prototype has been implemented with two modes: (1) a memory-mode in which reversibility is obtained by checkpointing to memory in the forward direction and restoring from memory in reverse, and (2) a computational-mode in which nothing is saved in the forward direction, and restoration is done entirely via inverse computation in reverse. The article is focused on detailed performance benchmarking to evaluate the runtime dynamics and performance effects, comparing reversible computation with checkpointing on both traditional CPU platforms and recent GPU accelerator platforms. For BLAS Level-1 subprograms, the data indicate that reversible computation is over an order of magnitude faster than checkpointing. For BLAS Level-2 and Level-3, a more complex tradeoff is observed between reversible computation and checkpointing, depending on the computational and memory complexities of the subprograms.
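The computational-mode idea is easiest to see on a Level-1 routine such as axpy: the forward pass overwrites y, and the reverse pass restores it by recomputation instead of reading a checkpoint. A minimal sketch (not the RBLAS interface itself):

```python
def axpy_forward(a, x, y):
    """BLAS Level-1 axpy: y <- a*x + y, overwriting y in place."""
    for i in range(len(y)):
        y[i] += a * x[i]

def axpy_reverse(a, x, y):
    """Restore the previous y by inverse computation: y <- y - a*x.
    No checkpoint of the old y is ever stored."""
    for i in range(len(y)):
        y[i] -= a * x[i]
```

With general floating-point data the add/subtract pair need not be bit-exact because of rounding; the integral values used below reverse exactly. The memory-mode alternative would instead copy y before the forward call and restore it from the copy.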

  15. Phase Inversion of EPDM/PP Blends: Effect of Viscosity Ratio

    NASA Astrophysics Data System (ADS)

    Machado, Ana Vera; Antunes, Carla Filipa; van Duin, Martin

    2011-07-01

    EPDM/PP blends and TPVs (without and with crosslinking, respectively) were prepared in a batch mixer using three different EPDM rubbers. The EPDM/PP-based TPVs were dynamically vulcanised using the resol/SnCl2 system. Samples were collected over time in order to obtain information on the morphology evolution and crosslink density during dynamic vulcanisation. The morphology was studied by SEM and the crosslink density by gel content. In the case of the low-viscosity EPDMs, crosslinking of the EPDM phase was retarded due to its low crosslinking efficiency. This delay in the crosslinking reaction enabled observation of the various stages of the morphological mechanism that takes place during dynamic vulcanisation. It could be observed that phase inversion takes place via a lamellar mechanism. More detailed insight into the phase inversion mechanism during dynamic vulcanisation was thus obtained.

  16. Stability Result For Dynamic Inversion Devised to Control Large Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.

    2001-01-01

    High-performance aircraft of the future will be designed lighter, more maneuverable, and operate over an ever-expanding flight envelope. One of the largest differences from the flight control perspective between current and future advanced aircraft is elasticity. Over the last decade, dynamic inversion methodology has gained considerable popularity in application to highly maneuverable fighter aircraft, which were treated as rigid vehicles. This paper is an initial attempt to establish global stability results for dynamic inversion methodology as applied to a large, flexible aircraft. This work builds on a previous result for rigid fighter aircraft and adds a new level of complexity that is the flexible aircraft dynamics, which cannot be ignored even in the most basic flight control. The results arise from observations of the control laws designed for a new generation of the High-Speed Civil Transport aircraft.

  17. Sorting signed permutations by inversions in O(n log n) time.

    PubMed

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating in an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions, also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(n log n) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(n log n + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(n log n) time.
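The elementary operation being counted, a signed inversion, reverses a segment of the permutation and flips the signs of its elements. A small helper makes this concrete (an illustration of the operation, not the sorting algorithms themselves):

```python
def apply_inversion(perm, i, j):
    """Apply a signed inversion to perm[i..j] (inclusive): reverse the
    segment and negate every element in it."""
    seg = [-g for g in reversed(perm[i:j + 1])]
    return perm[:i] + seg + perm[j + 1:]
```

Because reversing and negating a segment twice restores it, every signed inversion is its own inverse, which is why a shortest sequence of inversions transforming one genome into another also works in the opposite direction.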

  18. Inferior olive mirrors joint dynamics to implement an inverse controller.

    PubMed

    Alvarez-Icaza, Rodrigo; Boahen, Kwabena

    2012-10-01

    To produce smooth and coordinated motion, our nervous systems need to generate precisely timed muscle activation patterns that, due to axonal conduction delay, must be generated in a predictive and feedforward manner. Kawato proposed that the cerebellum accomplishes this by acting as an inverse controller that modulates descending motor commands to predictively drive the spinal cord such that the musculoskeletal dynamics are canceled out. This and other cerebellar theories do not, however, account for the rich biophysical properties expressed by the olivocerebellar complex's various cell types, making these theories difficult to verify experimentally. Here we propose that a multizonal microcomplex's (MZMC) inferior olivary neurons use their subthreshold oscillations to mirror a musculoskeletal joint's underdamped dynamics, thereby achieving inverse control. We used control theory to map a joint's inverse model onto an MZMC's biophysics, and we used biophysical modeling to confirm that inferior olivary neurons can express the dynamics required to mirror biomechanical joints. We then combined both techniques to predict how experimentally injecting current into the inferior olive would affect overall motor output performance. We found that this experimental manipulation unmasked a joint's natural dynamics, as observed by motor output ringing at the joint's natural frequency, with amplitude proportional to the amount of current. These results support the proposal that the cerebellum, in particular an MZMC, is an inverse controller; the results also provide a biophysical implementation for this controller and allow one to make an experimentally testable prediction.
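What an inverse model of an underdamped joint computes can be written down for a toy second-order joint, J*theta'' + b*theta' + k*theta = tau: given a desired trajectory, it returns the torque that realizes it. The parameter values below are arbitrary illustrations, not fitted biomechanical constants:

```python
import math

def inverse_dynamics_torque(theta, dt, J=0.05, b=0.2, k=3.0):
    """Torque required to drive the joint J*th'' + b*th' + k*th = tau along
    a sampled desired trajectory, using central finite differences
    (the joint's inverse model)."""
    tau = []
    for i in range(1, len(theta) - 1):
        vel = (theta[i + 1] - theta[i - 1]) / (2 * dt)
        acc = (theta[i + 1] - 2 * theta[i] + theta[i - 1]) / dt ** 2
        tau.append(J * acc + b * vel + k * theta[i])
    return tau
```

For a sinusoidal trajectory theta = sin(w*t), the analytic answer is (k - J*w^2)*sin(w*t) + b*w*cos(w*t), and the finite-difference version matches it closely; a forward model would run the same equation in the opposite direction, from torque to motion.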

  19. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Justin; Hund, Lauren

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
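The likelihood scaling described, tempering by an effective sample size to account for autocorrelation in the functional output, amounts to raising the iid likelihood to the power n_eff/n. A sketch under the assumption of a Gaussian error model (the form and values are illustrative, not the paper's calibration):

```python
import math

def scaled_gaussian_loglik(resid, sigma, n_eff):
    """iid Gaussian log-likelihood of the residuals, tempered by the ratio
    of the effective sample size to the nominal number of points, so that
    autocorrelated time-series output is not over-weighted."""
    n = len(resid)
    full = sum(-0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
               for r in resid)
    return (n_eff / n) * full
```

Halving n_eff halves the log-likelihood's influence in the posterior, which is the intended effect when neighboring residuals carry largely redundant information.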

  20. Proton exchange membrane fuel cell model for aging predictions: Simulated equivalent active surface area loss and comparisons with durability tests

    NASA Astrophysics Data System (ADS)

    Robin, C.; Gérard, M.; Quinaud, M.; d'Arbigny, J.; Bultel, Y.

    2016-09-01

    The prediction of Proton Exchange Membrane Fuel Cell (PEMFC) lifetime is one of the major challenges to optimize both material properties and dynamic control of the fuel cell system. In this study, by a multiscale modeling approach, a mechanistic catalyst dissolution model is coupled to a dynamic PEMFC cell model to predict the performance loss of the PEMFC. Results are compared to two 2000-h experimental aging tests. More precisely, an original approach is introduced to estimate the loss of an equivalent active surface area during an aging test. Indeed, when the computed Electrochemical Catalyst Surface Area profile is fitted on the experimental measures from Cyclic Voltammetry, the computed performance loss of the PEMFC is underestimated. To be able to predict the performance loss measured by polarization curves during the aging test, an equivalent active surface area is obtained by a model inversion. This methodology enables to successfully find back the experimental cell voltage decay during time. The model parameters are fitted from the polarization curves so that they include the global degradation. Moreover, the model captures the aging heterogeneities along the surface of the cell observed experimentally. Finally, a second 2000-h durability test in dynamic operating conditions validates the approach.

  1. Movement Characteristics Analysis and Dynamic Simulation of Collaborative Measuring Robot

    NASA Astrophysics Data System (ADS)

    guoqing, MA; li, LIU; zhenglin, YU; guohua, CAO; yanbin, ZHENG

    2017-03-01

    Human-machine collaboration is becoming increasingly necessary, and so collaborative robot applications are in high demand. We selected a UR10 robot as the research subject for this study. First, we applied the D-H coordinate convention to establish the robot's link frames, and we then used inverse transformations to solve the robot's inverse kinematics for all joint angles. We analyzed the UR robot's dynamics with the Lagrange method, performed dynamic simulation in the ADAMS multibody dynamics software, and thereby verified the correctness of the derived dynamic model.
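    The D-H step can be sketched generically: each link contributes one homogeneous transform, and chaining them gives the forward kinematics that the inverse-kinematics solution must invert. The two-link example at the end is a toy case, not the UR10's actual D-H table.

    ```python
    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Standard Denavit-Hartenberg link transform (generic form)."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])

    def forward_kinematics(dh_rows):
        """Chain the per-link transforms into the end-effector pose."""
        T = np.eye(4)
        for row in dh_rows:
            T = T @ dh_transform(*row)
        return T

    # Toy two-link planar arm, both joints at zero, link lengths 1:
    T = forward_kinematics([(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 1.0, 0.0)])
    # End effector sits at x = 2 with identity orientation.
    ```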

  2. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, a factorization of the inverse of the mass matrix is derived in Schur-complement form as M^-1 = C - B^* A^-1 B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^-1. For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the operational-space mass matrix Lambda and its inverse Lambda^-1 are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
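    The Schur-complement structure the factorization rests on can be checked numerically in its generic form: for a symmetric positive definite block matrix K = [[A, B], [B^T, C]], the bottom-right block of K^-1 equals (C - B^T A^-1 B)^-1. This is the textbook identity, not the paper's specific block-tridiagonal construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    X = rng.standard_normal((2 * n, 2 * n))
    K = X @ X.T + 2 * n * np.eye(2 * n)      # symmetric positive definite
    A, B, C = K[:n, :n], K[:n, n:], K[n:, n:]

    # Schur complement of A in K:
    schur = C - B.T @ np.linalg.solve(A, B)

    # Bottom-right block of the full inverse:
    bottom_right = np.linalg.inv(K)[n:, n:]

    # The identity: (K^-1)_{22} = (C - B^T A^-1 B)^-1
    assert np.allclose(bottom_right, np.linalg.inv(schur))
    ```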

  3. Computation of dynamic seismic responses to viscous fluid of digitized three-dimensional Berea sandstones with a coupled finite-difference method.

    PubMed

    Zhang, Yang; Toksöz, M Nafi

    2012-08-01

    The seismic response of saturated porous rocks is studied numerically using microtomographic images of three-dimensional digitized Berea sandstones. A stress-strain calculation is employed to compute the velocities and attenuations of rock samples whose sizes are much smaller than the seismic wavelength of interest. To compensate for the contributions of small cracks lost in the imaging process to the total velocity and attenuation, a hybrid method is developed to recover the crack distribution, in which the differential effective medium theory, the Kuster-Toksöz model, and a modified squirt-flow model are utilized in a two-step Monte Carlo inversion. In the inversion, the velocities of P- and S-waves measured for the dry and water-saturated cases, and the measured attenuation of P-waves for different fluids are used. By using such a hybrid method, both the velocities of saturated porous rocks and the attenuations are predicted accurately when compared to laboratory data. The hybrid method is a practical way to model numerically the seismic properties of saturated porous rocks until very high resolution digital data are available. Cracks lost in the imaging process are critical for accurately predicting velocities and attenuations of saturated porous rocks.

  4. 3D Acoustic Full Waveform Inversion for Engineering Purpose

    NASA Astrophysics Data System (ADS)

    Lim, Y.; Shin, S.; Kim, D.; Kim, S.; Chung, W.

    2017-12-01

    Seismic waveform inversion is among the most actively researched data processing techniques. In recent years, with an increase in marine development projects, seismic surveys are commonly conducted for engineering purposes; however, research on applying waveform inversion to such surveys remains insufficient. Waveform inversion updates the subsurface physical property by minimizing the difference between modeled and observed data. Furthermore, it can be used to generate an accurate subsurface image; however, this technique consumes substantial computational resources. Its most compute-intensive step is the calculation of the gradient and hessian values, and this cost is far higher in 3D than in 2D. This paper introduces a new method for calculating gradient and hessian values in an effort to reduce the computational burden. In conventional waveform inversion, the calculation area covers all sources and receivers. In seismic surveys for engineering purposes, the number of receivers is limited; it is therefore inefficient to construct the hessian and gradient for the entire region (Figure 1). To tackle this problem, we calculate the gradient and the hessian for a single shot only within the range of the relevant source and receivers, and then sum these local contributions over all shots (Figure 2). In this paper, we demonstrate that restricting the per-shot calculation area of the hessian and gradient reduces the overall amount of computation and, therefore, the computation time. Furthermore, we show that waveform inversion can be suitably applied for engineering purposes. In future research, we propose to ascertain an effective calculation range. This research was supported by the Basic Research Project (17-3314) of the Korea Institute of Geoscience and Mineral Resources (KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.
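    The per-shot windowing idea can be sketched as: compute each shot's gradient only on a window around that shot's source-receiver span, then accumulate the windows into the global model. Grid sizes, window rule, and the placeholder gradient kernel below are all illustrative assumptions.

    ```python
    import numpy as np

    nx, nz = 200, 60                      # toy model grid (columns x depth)
    grad_global = np.zeros((nz, nx))

    def shot_window(src_x, rcv_span, pad=10):
        """Column range covering the source, its receivers, and a pad."""
        lo = max(0, src_x - rcv_span - pad)
        hi = min(nx, src_x + rcv_span + pad)
        return lo, hi

    def local_gradient(width):
        # Placeholder for the adjoint-state cross-correlation on the window.
        return np.ones((nz, width))

    # Accumulate windowed per-shot gradients instead of full-model ones.
    for src_x in range(20, 200, 40):
        lo, hi = shot_window(src_x, rcv_span=15)
        grad_global[:, lo:hi] += local_gradient(hi - lo)
    ```

    Each per-shot computation touches only hi - lo columns rather than all nx, which is where the savings claimed above come from.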

  5. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
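    The non-uniform sampling step can be sketched as allocating a fixed Monte Carlo particle budget across pencil-beam spots in proportion to their current intensities, so high-intensity spots receive more particles. The allocation rule and numbers below are illustrative; the paper's APS scheme adapts this during optimization.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    intensities = np.array([5.0, 1.0, 0.1, 3.9])   # current spot weights (made up)
    budget = 100_000                               # total MC particles per pass

    # Sample particle counts proportionally to spot intensity.
    probs = intensities / intensities.sum()
    particles_per_spot = rng.multinomial(budget, probs)
    ```

    Spots that barely contribute to the dose then consume almost none of the expensive MC budget.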

  6. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  7. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456

  8. Identification of dynamic load for prosthetic structures.

    PubMed

    Zhang, Dequan; Han, Xu; Zhang, Zhongpu; Liu, Jie; Jiang, Chao; Yoda, Nobuhiro; Meng, Xianghua; Li, Qing

    2017-12-01

    Dynamic load exists in numerous biomechanical systems, and its identification signifies a critical issue for characterizing dynamic behaviors and studying biomechanical consequences of the systems. This study aims to identify dynamic load in dental prosthetic structures, namely, a 3-unit implant-supported fixed partial denture (I-FPD) and a teeth-supported fixed partial denture. The 3-dimensional finite element models were constructed from a specific patient's computerized tomography images. A forward algorithm and regularization technique were developed for identifying dynamic load. To verify the effectiveness of the proposed identification method, the I-FPD and teeth-supported fixed partial denture structures were investigated to determine the dynamic loads. For validating the results of inverse identification, an experimental force-measuring system was developed by using a 3-dimensional piezoelectric transducer to measure the dynamic load in the I-FPD structure in vivo. The loads were identified computationally at different noise levels to determine the influence of noise on identification accuracy. The errors between the measured load and its identified counterpart were calculated to evaluate the practical applicability of the proposed procedure in biomechanical engineering. This study is expected to play a demonstrative role in identifying dynamic loading in biomedical systems, where a direct in vivo measurement may be rather demanding in some areas of clinical interest. Copyright © 2017 John Wiley & Sons, Ltd.
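    The forward-plus-regularization idea can be sketched on a toy problem: the measured response y is modeled as a causal convolution y = H f of the unknown load f with an impulse response, and f is recovered by Tikhonov regularization. The kernel, load shape, and regularization weight are illustrative; the paper's forward algorithm is more elaborate.

    ```python
    import numpy as np

    n, dt = 200, 1e-3
    t = np.arange(n) * dt
    h = np.exp(-t / 0.02)                       # toy impulse response

    # Causal (lower-triangular Toeplitz) convolution matrix: H[i, j] = h[i-j]*dt
    lag = np.arange(n)[:, None] - np.arange(n)[None, :]
    H = np.where(lag >= 0, np.exp(-np.maximum(lag, 0) * dt / 0.02), 0.0) * dt

    # Synthetic "true" load and noisy measured response.
    f_true = np.exp(-((t - 0.05) ** 2) / (2 * 0.005 ** 2))
    y = H @ f_true + 1e-5 * np.random.default_rng(2).standard_normal(n)

    # Tikhonov-regularized load identification: (H^T H + lam I) f = H^T y
    lam = 1e-9
    f_est = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
    ```

    The regularization term keeps the deconvolution stable as measurement noise grows, which mirrors the noise-level study described above.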

  9. Multiple estimation channel decoupling and optimization method based on inverse system

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng

    2018-03-01

    This paper addresses the intelligent autonomous navigation requirements of an intelligent deformation missile. Building on dynamics and kinematics modeling of the missile, on the navigation subsystem's solution method, and on its error modeling, the paper focuses on the corresponding data-fusion and decision-fusion technology: the sensitive channel of the filter input is decoupled through an inverse system designed from the dynamics, which reduces the influence of sudden changes in the measurement information on the filter input. A series of simulation experiments then verifies the feasibility and effectiveness of the inverse-system decoupling algorithm.

  10. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse-of-dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
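    The LMCMC ingredient can be sketched in isolation with an unadjusted Langevin sampler on a toy two-dimensional "posterior" standing in for the KPCA-reduced feature space. This is a minimal illustration under made-up parameters; the paper's method couples the sampler to a KPCA projection, and a Metropolis correction is often added in practice.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def grad_log_post(z):
        """Gradient of a toy anisotropic Gaussian log-posterior."""
        inv_sigma = np.array([[4.0, 0.0], [0.0, 1.0]])  # precisions 4 and 1
        return -inv_sigma @ z

    # Unadjusted Langevin dynamics: z <- z + (eps/2) grad log p(z) + sqrt(eps) xi
    eps = 0.05
    z = np.zeros(2)
    samples = []
    for _ in range(20_000):
        z = z + 0.5 * eps * grad_log_post(z) + np.sqrt(eps) * rng.standard_normal(2)
        samples.append(z)
    samples = np.array(samples)
    ```

    The gradient term is what lets the chain exploit adjoint-computed sensitivities, which is why the method pairs naturally with adjoint models.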

  11. Superresolution radar imaging based on fast inverse-free sparse Bayesian learning for multiple measurement vectors

    NASA Astrophysics Data System (ADS)

    He, Xingyu; Tong, Ningning; Hu, Xiaowei

    2018-01-01

    Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solution for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration, and its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.

  12. Non-Arrhenius viscosity related to short-time ion dynamics in a fragile molten salt.

    PubMed

    Singh, Prabhakar; Banhatti, Radha D; Funke, Klaus

    2005-03-21

    The equation T σ_DC(T) = α exp[-(E*/k_B T) - γ exp(E*/k_B T)] has been used to understand the non-Arrhenius behaviour of the DC conductivity in supercooled glass-forming melts. Here, α, γ and E* are parameters, E* denoting the activation energy for an elementary displacive step. Unlike the empirical VTF relation, our equation provides a link between the long-time and the short-time ion dynamics as observed in broad-band conductivity spectra. Surprisingly, the same equation with the same value of E* but different γ successfully describes the fluidity (inverse viscosity) of a fragile glass-forming melt. This opens up the possibility of relating non-Arrhenius viscosities to short-time properties, which is in agreement with recent experimental and computer-simulation results.
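    The equation's non-Arrhenius curvature is easy to see numerically: the γ term grows as temperature drops and suppresses the conductivity below the pure Arrhenius (γ = 0) line. The parameter values below are made up for illustration, not fitted values from the paper.

    ```python
    import numpy as np

    kB = 8.617e-5  # Boltzmann constant, eV/K

    def sigma_dc(T, alpha, gamma, E):
        """T*sigma_DC(T) = alpha * exp[-(E*/kB T) - gamma * exp(E*/kB T)]."""
        x = E / (kB * T)
        return (alpha / T) * np.exp(-x - gamma * np.exp(x))

    T = np.linspace(400.0, 800.0, 200)        # K
    sig = sigma_dc(T, alpha=1e6, gamma=1e-3, E=0.3)

    # Arrhenius reference (gamma = 0): the double-exponential term bends
    # the curve away from this line increasingly as T decreases.
    sig_arr = sigma_dc(T, alpha=1e6, gamma=0.0, E=0.3)
    ```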

  13. Robust determination of the chemical potential in the pole expansion and selected inversion method for solving Kohn-Sham density functional theory

    NASA Astrophysics Data System (ADS)

    Jia, Weile; Lin, Lin

    2017-10-01

    Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike diagonalization type methods, the chemical potential often cannot be directly read off from the result of a single step of evaluation of the Fermi operator. Hence multiple evaluations must be performed sequentially to compute the chemical potential to ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential reaches its convergence along the SCF iteration. Instead of evaluating the Fermi operator multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for one single evaluation of the Fermi operator, even when the initial guess is far away from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating characters, as well as results from ab initio molecular dynamics.
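    The bracketing idea can be sketched with a toy spectrum: the electron count N(mu) is monotone in the chemical potential mu, so rigorous upper and lower bounds can be tightened until N(mu) matches the target. Sequential bisection is used here as a simple stand-in; the paper's two-level strategy evaluates candidate potentials in parallel.

    ```python
    import numpy as np

    def electron_count(mu, eigs, beta=40.0):
        """Fermi-Dirac occupation summed over a toy eigenvalue spectrum."""
        return np.sum(1.0 / (1.0 + np.exp(beta * (eigs - mu))))

    eigs = np.linspace(-2.0, 2.0, 100)   # toy single-particle energies
    n_target = 50.0                      # desired number of electrons

    # Maintain rigorous lower/upper bounds on mu and tighten them.
    lo, hi = eigs.min() - 1.0, eigs.max() + 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if electron_count(mid, eigs) < n_target:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    ```

    Monotonicity of N(mu) is what makes the bound updates rigorous, which is the property the PEXSI strategy exploits.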

  14. Full-Body Musculoskeletal Model for Muscle-Driven Simulation of Human Gait.

    PubMed

    Rajagopal, Apoorva; Dembia, Christopher L; DeMers, Matthew S; Delp, Denny D; Hicks, Jennifer L; Delp, Scott L

    2016-10-01

    Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source 3-D musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model's musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower extremity. The model is implemented in the open-source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations.

  15. Full body musculoskeletal model for muscle-driven simulation of human gait

    PubMed Central

    Rajagopal, Apoorva; Dembia, Christopher L.; DeMers, Matthew S.; Delp, Denny D.; Hicks, Jennifer L.; Delp, Scott L.

    2017-01-01

    Objective Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source, three-dimensional musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Methods Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model’s musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Results Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. Conclusion These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower-extremity. Significance The model is implemented in the open source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations. PMID:27392337

  16. Robust determination of the chemical potential in the pole expansion and selected inversion method for solving Kohn-Sham density functional theory.

    PubMed

    Jia, Weile; Lin, Lin

    2017-10-14

    Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike diagonalization type methods, the chemical potential often cannot be directly read off from the result of a single step of evaluation of the Fermi operator. Hence multiple evaluations must be performed sequentially to compute the chemical potential to ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential reaches its convergence along the SCF iteration. Instead of evaluating the Fermi operator multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for one single evaluation of the Fermi operator, even when the initial guess is far away from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating characters, as well as results from ab initio molecular dynamics.

  17. Distributed micro-releases of bioterror pathogens : threat characterizations and epidemiology from uncertain patient observables.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Marzouk, Youssef M.; Adams, Brian M.

    2008-10-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern since the anthrax attacks of 2001. The ability to characterize the parameters of such attacks, i.e., to estimate the number of people infected, the time of infection, the average dose received, and the rate of disease spread in contemporary American society (for contagious diseases), is important when planning a medical response. For non-contagious diseases, we address the characterization problem by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To keep the approach relevant for response planning, we limit ourselves to 3.5 days of data. In computational tests performed for anthrax, we usually find these observation windows sufficient, especially if the outbreak model employed in the inverse problem is accurate. For contagious diseases, we formulated a Bayesian inversion technique to infer both pathogenic transmissibility and the social network from outbreak observations, ensuring that the two determinants of spreading are identified separately. We tested this technique on data collected from a 1967 smallpox epidemic in Abakaliki, Nigeria. We inferred, probabilistically, different transmissibilities in the structured Abakaliki population, the social network, and the chain of transmission. Finally, we developed an individual-based epidemic model to realistically simulate the spread of a rare (or eradicated) disease in a modern society. This model incorporates the mixing patterns observed in an (American) urban setting and accepts, as model input, pathogenic transmissibilities estimated from historical outbreaks that may have occurred in socio-economic environments with little resemblance to contemporary society. Techniques were also developed to simulate disease spread on static and sampled network reductions of the dynamic social networks originally in the individual-based model, yielding faster, though approximate, network-based epidemic models. These reduced-order models are useful in scenario analysis for medical response planning, as well as in computationally intensive inverse problems.

  18. Inversion Of Jacobian Matrix For Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Report discusses inversion of Jacobian matrix for class of six-degree-of-freedom arms with spherical wrist, i.e., with last three joint axes intersecting. Shows that by taking advantage of simple geometry of such arms, closed-form solution of Q = J^-1 X, which represents linear transformation from task space to joint space, is obtained efficiently. Presents solutions for PUMA arm, JPL/Stanford arm, and six-revolute-joint coplanar arm along with all singular points. Main contribution of paper is to show that simple geometry of this type of arm can be exploited in performing inverse transformation without any need to compute Jacobian or its inverse explicitly. Implication of this computational efficiency is that advanced task-space control schemes for spherical-wrist arms can be implemented more efficiently.
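    The general numerical point behind the report, that Q = J^-1 X never requires forming J^-1 explicitly, can be illustrated with a stand-in 6x6 Jacobian: a linear solve gives the same joint rates more cheaply and more stably than an explicit inverse (the report goes further, replacing even the solve with closed-form geometry for spherical-wrist arms).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    J = rng.standard_normal((6, 6))       # stand-in Jacobian (not a real arm's)
    X = rng.standard_normal(6)            # task-space velocity

    Q_solve = np.linalg.solve(J, X)       # preferred: one linear solve
    Q_inv = np.linalg.inv(J) @ X          # explicit inverse: avoid in practice

    assert np.allclose(Q_solve, Q_inv)    # same answer, different cost/stability
    ```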

  19. Recursive inverse factorization.

    PubMed

    Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N

    2008-03-14

    A recursive algorithm for the inverse factorization S(-1)=ZZ(*) of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A.M.N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
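
    The iterative-refinement kernel behind such an inverse factorization can be sketched in a few lines. The following dense NumPy illustration uses a Newton-Schulz-type refinement (an assumption of this sketch, not the paper's recursive decomposition), iterating Z until Z(*)SZ reaches the identity, so that S(-1) = ZZ(*); as in the abstract, the only kernel is matrix-matrix multiplication.

    ```python
    import numpy as np

    def inverse_factor(S, tol=1e-12, max_iter=100):
        """Refine Z so that S^-1 = Z Z^T for a symmetric positive definite S.

        Newton-Schulz-style update Z <- Z (3I - Z^T S Z) / 2, which converges
        quadratically when the eigenvalues of Z0^T S Z0 lie in (0, 2); the
        initial scaling below guarantees they lie in (0, 1].
        """
        n = S.shape[0]
        I = np.eye(n)
        Z = I / np.linalg.norm(S, 2) ** 0.5   # puts eig(Z^T S Z) in (0, 1]
        for _ in range(max_iter):
            Y = Z.T @ S @ Z
            if np.linalg.norm(I - Y, 2) < tol:
                break
            Z = Z @ (3 * I - Y) / 2
        return Z
    ```

    With sparse S the same update keeps all work in sparse matrix products, which is where the linear scaling claimed in the abstract comes from.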

  20. 2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation

    NASA Astrophysics Data System (ADS)

    Proctor, Camron Lisle

    The continuous adjoint (CA) technique for optimization and/or inverse-design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard-of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, a verification of the efficacy of the inverse-design tool for the inviscid adjoint equations, as well as possible numerical implementation pitfalls, is discussed. The NACA0012 airfoil is used as the initial airfoil and surface pressure distribution with the NACA16009 as the desired pressure, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. These inviscid-adjoint inverse-design results are followed by a discussion of the viscous inverse-design results and of techniques used to further the convergence of the optimizer.
The relationship between limiting step-size and convergence in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate for the optimization in viscous problems, at a negligible increase in computational cost, but is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, also a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.

  1. Evidence for Use of Mathematical Inversion by Three-Year-Old Children

    ERIC Educational Resources Information Center

    Sherman, Jody; Bisanz, Jeffrey

    2007-01-01

    The principle of inversion--that a + b - b must equal a--requires a sensitivity to the relation between addition and subtraction that is critical for understanding arithmetic. Use of inversion, albeit inconsistent, has been observed in school-age children, but when use of a computational shortcut based on inversion emerges and how awareness of the…

  2. Feedback control laws for highly maneuverable aircraft

    NASA Technical Reports Server (NTRS)

    Garrard, William L.; Balas, Gary J.

    1994-01-01

    During the first half of the year, the investigators concentrated their efforts on completing the design of control laws for the longitudinal axis of the HARV. During the second half of the year they concentrated on the synthesis of control laws for the lateral-directional axes. The longitudinal control law design efforts can be briefly summarized as follows. Longitudinal control laws were developed for the HARV using mu synthesis design techniques coupled with dynamic inversion. An inner loop dynamic inversion controller was used to simplify the system dynamics by eliminating the aerodynamic nonlinearities and inertial cross coupling. Models of the errors resulting from uncertainties in the principal longitudinal aerodynamic terms were developed and included in the model of the HARV with the inner loop dynamic inversion controller. This resulted in an inner loop transfer function model which was an integrator with the modeling errors characterized as uncertainties in gain and phase. Outer loop controllers were then designed using mu synthesis to provide robustness to these modeling errors and give desired response to pilot inputs. Both pitch rate and angle of attack command following systems were designed. The following tasks have been accomplished for the lateral-directional controllers: inner and outer loop dynamic inversion controllers have been designed; an error model based on a linearized perturbation model of the inner loop system was derived; controllers for the inner loop system have been designed, using classical techniques, that control roll rate and Dutch roll response; the inner loop dynamic inversion and classical controllers have been implemented on the six degree of freedom simulation; and a lateral-directional control allocation scheme has been developed based on minimizing required control effort.
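
    The inner-loop dynamic inversion idea can be illustrated on a hypothetical scalar plant (the nonlinearities below are assumptions of this sketch, not the HARV aerodynamics): for xdot = f(x) + g(x)u with nonzero control effectiveness g, the choice u = (v - f(x))/g(x) cancels the nonlinearity and reduces the loop to the integrator xdot = v, which an outer loop then shapes.

    ```python
    import math

    def simulate(x0=0.0, x_cmd=1.0, k=4.0, dt=1e-3, T=3.0):
        """Euler simulation of xdot = f(x) + g(x) u under an inner-loop
        dynamic-inversion controller and a proportional outer loop."""
        f = lambda x: -math.sin(x) + 0.5 * x**2   # assumed plant nonlinearity
        g = lambda x: 2.0 + math.cos(x)           # control effectiveness, >= 1
        x = x0
        for _ in range(int(T / dt)):
            v = k * (x_cmd - x)          # outer loop: desired integrator input
            u = (v - f(x)) / g(x)        # inner loop: cancel f and g exactly
            x += dt * (f(x) + g(x) * u)  # closed loop behaves as xdot = v
        return x
    ```

    With exact cancellation the closed loop is the linear system xdot = k(x_cmd - x), so x settles at the command regardless of f and g; modeling errors in f and g are precisely what the mu-synthesis outer loop above is designed to tolerate.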

  3. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
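
    A toy numerical sketch of steps 2-4 (with a point outcome in place of the paper's Cox model and a single binary baseline confounder, both assumptions of this illustration) shows how weighting uncensored subjects by the inverse probability of remaining uncensored removes the selection bias that the naive uncensored mean exhibits.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000
    L = rng.integers(0, 2, n)                    # baseline confounder
    p_cens = np.where(L == 1, 0.6, 0.2)          # censoring depends on L
    C = rng.random(n) < p_cens                   # step 2: artificial censoring
    Y = 2.0 + 3.0 * L + rng.standard_normal(n)   # outcome also depends on L

    uncens = ~C
    naive = Y[uncens].mean()                     # biased: L=1 over-censored

    # step 3: inverse probability of remaining uncensored, estimated
    # nonparametrically within levels of L
    p_stay = np.array([uncens[L == l].mean() for l in (0, 1)])[L]
    w = 1.0 / p_stay

    # step 4 (simplified): weighted mean in place of a weighted Cox model
    ipw = np.average(Y[uncens], weights=w[uncens])   # recovers E[Y] = 3.5
    ```

    Here the naive mean converges to 3.0 because subjects with L = 1 are censored more often, while the weighted mean recovers the true marginal mean of 3.5, mirroring how the IPW Cox model in the abstract corrects the survival comparison.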

  4. Calculation of the dielectric properties of semiconductors

    NASA Astrophysics Data System (ADS)

    Engel, G. E.; Farid, Behnam

    1992-12-01

    We report on numerical calculations of the dynamical dielectric function in silicon, using a continued-fraction expansion of the polarizability and a recently proposed representation of the inverse dielectric function in terms of plasmonlike excitations. A number of important technical refinements to further improve the computational efficiency of the method are introduced, making the ab initio calculation of the full energy dependence of the dielectric function comparable in cost to calculation of its static value. Physical results include the observation of previously unresolved features in the random-phase approximated dielectric function and its inverse within the framework of density-functional theory in the local-density approximation, which may be accessible to experiment. We discuss the dispersion of plasmon energies in silicon along the Λ and Δ directions and find improved agreement with experiment compared to earlier calculations. We also present quantitative evidence indicating the degree of violation of the Johnson f-sum rule for the dielectric function due to the nonlocality of the one-electron potential used in the underlying band-structure calculations.

  5. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
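
    The forward model inverted here can be sketched with the standard k-space dipole kernel D(k) = 1/3 - kz^2/|k|^2, which vanishes on a conical surface; this zero cone is why direct division fails and a TV-regularized scheme is needed. The NumPy sketch below (an illustration of the forward convolution, not the patented CIMRI pipeline) also shows a naive inverse restricted to kernel values safely away from the cone.

    ```python
    import numpy as np

    def dipole_kernel(shape):
        """k-space dipole kernel D(k) = 1/3 - kz^2 / |k|^2, with D(0) := 0."""
        kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape],
                                 indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        with np.errstate(invalid="ignore", divide="ignore"):
            D = 1.0 / 3.0 - np.where(k2 > 0, kz**2 / k2, 0.0)
        D.flat[0] = 0.0                 # zero-frequency term is unobservable
        return D

    shape = (32, 32, 32)
    D = dipole_kernel(shape)
    chi = np.zeros(shape)
    chi[12:20, 12:20, 12:20] = 1.0      # toy susceptibility distribution
    field = np.fft.ifftn(D * np.fft.fftn(chi)).real   # forward 3D convolution

    # Naive inverse: divide only where |D| is safely away from the zero cone;
    # the cone itself is what regularization must fill in.
    mask = np.abs(D) > 0.1
    chi_k = np.where(mask, np.fft.fftn(field) / np.where(mask, D, 1.0), 0.0)
    ```

    The masked division recovers only the part of the susceptibility spectrum outside the cone; TV regularization with split Bregman, as in the abstract, supplies the missing cone data from spatial sparsity priors.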

  6. Temperature, Pressure, and Infrared Image Survey of an Axisymmetric Heated Exhaust Plume

    NASA Technical Reports Server (NTRS)

    Nelson, Edward L.; Mahan, J. Robert; Birckelbaw, Larry D.; Turk, Jeffrey A.; Wardwell, Douglas A.; Hange, Craig E.

    1996-01-01

    The focus of this research is to numerically predict an infrared image of a jet engine exhaust plume, given field variables such as temperature, pressure, and exhaust plume constituents as a function of spatial position within the plume, and to compare this predicted image directly with measured data. This work is motivated by the need to validate computational fluid dynamic (CFD) codes through infrared imaging. The technique of reducing the three-dimensional field variable domain to a two-dimensional infrared image invokes the use of an inverse Monte Carlo ray trace algorithm and an infrared band model for exhaust gases. This report describes an experiment in which the above-mentioned field variables were carefully measured. Results from this experiment, namely tables of measured temperature and pressure data, as well as measured infrared images, are given. The inverse Monte Carlo ray trace technique is described. Finally, experimentally obtained infrared images are directly compared to infrared images predicted from the measured field variables.

  7. Transient multi-physics analysis of a magnetorheological shock absorber with the inverse Jiles-Atherton hysteresis model

    NASA Astrophysics Data System (ADS)

    Zheng, Jiajia; Li, Yancheng; Li, Zhaochun; Wang, Jiong

    2015-10-01

    This paper presents multi-physics modeling of an MR absorber considering the magnetic hysteresis to capture the nonlinear relationship between the applied current and the generated force under impact loading. The magnetic field, temperature field, and fluid dynamics are represented by the Maxwell equations, conjugate heat transfer equations, and Navier-Stokes equations. These fields are coupled through the apparent viscosity and the magnetic force, both of which in turn depend on the magnetic flux density and the temperature. Based on a parametric study, an inverse Jiles-Atherton hysteresis model is used and implemented for the magnetic field simulation. The temperature rise of the MR fluid in the annular gap caused by core loss (i.e. eddy current loss and hysteresis loss) and fluid motion is computed to investigate the current-force behavior. A group of impulsive tests was performed for the manufactured MR absorber with step exciting currents. The numerical and experimental results showed good agreement, which validates the effectiveness of the proposed multi-physics FEA model.

  8. A design philosophy for multi-layer neural networks with applications to robot control

    NASA Technical Reports Server (NTRS)

    Vadiee, Nader; Jamshidi, MO

    1989-01-01

    A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates sensory information with faults. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictory behavior which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mapping from a multitude of input excitory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of a special-purpose neuro-computer hardware which can be used in spatio-temporal pattern recognitions in such areas as air defense systems, e.g., target tracking, and recognition. Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.

  9. An adaptive drug delivery design using neural networks for effective treatment of infectious diseases: a simulation study.

    PubMed

    Padhi, Radhakant; Bhardhwaj, Jayender R

    2009-06-01

    An adaptive drug delivery design is presented in this paper using neural networks for effective treatment of infectious diseases. The generic mathematical model used describes the coupled evolution of concentration of pathogens, plasma cells, antibodies and a numerical value that indicates the relative characteristic of a damaged organ due to the disease under the influence of external drugs. From a system theoretic point of view, the external drugs can be interpreted as control inputs, which can be designed based on control theoretic concepts. In this study, assuming a set of nominal parameters in the mathematical model, first a nonlinear controller (drug administration) is designed based on the principle of dynamic inversion. This nominal drug administration plan was found to be effective in curing "nominal model patients" (patients whose immunological dynamics conform exactly to the mathematical model used for the control design). However, it was found to be ineffective in curing "realistic model patients" (patients whose immunological dynamics may have off-nominal parameter values and possibly unwanted inputs) in general. Hence, to make the drug delivery dosage design more effective for realistic model patients, a model-following adaptive control design is carried out next with the help of neural networks that are trained online. Simulation studies indicate that the adaptive controller proposed in this paper holds promise in killing the invading pathogens and healing the damaged organ even in the presence of parameter uncertainties and continued pathogen attack. Note that the computational requirements for computing the control are minimal and all associated computations (including the training of neural networks) can be carried out online. However, it assumes that the required diagnosis process can be carried out at a sufficiently fast rate so that all the states are available for control computation.

  10. A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.

    PubMed

    Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin

    2014-01-01

    This paper presents new simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces the error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, thereby saving unnecessary calculations. These algorithms are highly classroom-oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the development of fractions. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
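
    The flexible-pivot idea can be sketched as follows (an illustration under assumed details, not the paper's exact algorithm): elimination proceeds with a pluggable pivot rule, and the determinant accumulates the pivots together with the sign of the implied row swaps.

    ```python
    def determinant(A, choose_pivot=None):
        """Determinant by Gaussian elimination with a pluggable pivot rule.

        `choose_pivot(A, candidates)` returns the row index of the pivot for
        the current column from the candidate rows with a nonzero entry; the
        default takes the first candidate.  Each step fixes one row/column,
        so the active submatrix shrinks by one order per iteration.
        """
        A = [row[:] for row in A]          # work on a copy
        n = len(A)
        det, sign = 1.0, 1
        for col in range(n):
            # flexible pivot: any not-yet-used row with a nonzero entry here
            cand = [r for r in range(col, n) if A[r][col] != 0]
            if not cand:
                return 0.0                 # singular matrix
            p = cand[0] if choose_pivot is None else choose_pivot(A, cand)
            if p != col:
                A[col], A[p] = A[p], A[col]
                sign = -sign               # each swap flips the sign
            det *= A[col][col]
            for r in range(col + 1, n):
                f = A[r][col] / A[col][col]
                for c in range(col, n):
                    A[r][c] -= f * A[col][c]
        return sign * det
    ```

    A custom `choose_pivot` can, for example, prefer entries that keep the arithmetic fraction-free, which is the freedom the abstract exploits for classroom use.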

  11. Robust, nonlinear, high angle-of-attack control design for a supermaneuverable vehicle

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.

    1993-01-01

    High angle-of-attack flight control laws are developed for a supermaneuverable fighter aircraft. The methods of dynamic inversion and structured singular value synthesis are combined into an approach which addresses both the nonlinearity and robustness problems of flight at extreme operating conditions. The primary purpose of the dynamic inversion control elements is to linearize the vehicle response across the flight envelope. Structured singular value synthesis is used to design a dynamic controller which provides robust tracking to pilot commands. The resulting control system achieves desired flying qualities and guarantees a large margin of robustness to uncertainties for high angle-of-attack flight conditions. The results of linear simulation and structured singular value stability analysis are presented to demonstrate satisfaction of the design criteria. High fidelity nonlinear simulation results show that the combined dynamic inversion/structured singular value synthesis control law achieves a high level of performance in a realistic environment.

  12. Microsecond Simulations of DNA and Ion Transport in Nanopores with Novel Ion-Ion and Ion-Nucleotides Effective Potentials

    PubMed Central

    De Biase, Pablo M.; Markosyan, Suren; Noskov, Sergei

    2014-01-01

    We developed a novel scheme based on Grand-Canonical Monte-Carlo/Brownian Dynamics (GCMC/BD) simulations and have extended it to studies of ion currents across three nanopores with the potential for ssDNA sequencing: the solid-state nanopore Si3N4, α-hemolysin, and its E111N/M113Y/K147N mutant. To describe nucleotide-specific ion dynamics compatible with a coarse-grained ssDNA model, we used the Inverse Monte-Carlo protocol, which maps the relevant ion-nucleotide distribution functions from all-atom MD simulations. Combined with the previously developed simulation platform for Brownian Dynamics (BD) simulations of ion transport, it allows for microsecond- and millisecond-long simulations of ssDNA dynamics in nanopores with a conductance computation accuracy that equals or exceeds that of all-atom MD simulations. In spite of the simplifications, the protocol produces results that agree with the results of previous studies on ion conductance across open channels and provide direct correlations with experimentally measured blockade currents and ion conductances that have been estimated from all-atom MD simulations. PMID:24738152

  13. Oligopolies with contingent workforce and unemployment insurance systems

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akio; Merlone, Ugo; Szidarovszky, Ferenc

    2015-10-01

    In the recent literature the introduction of modified cost functions has added reality into the classical oligopoly analysis. Furthermore, the market evolution requires much more flexibility of firms, and in several countries contingent workforce plays an important role in the production choices of firms. Therefore, an analysis of dynamic adjustment costs is needed to understand oligopoly dynamics. In this paper, dynamic single-product oligopolies without product differentiation are first examined with the additional consideration of production adjustment costs. Linear inverse demand and cost functions are considered and it is assumed that the firms adjust their outputs partially toward best response. The set of steady states is characterized by a system of linear inequalities and there are usually infinitely many steady states. The asymptotic behavior of the output trajectories is examined by using computer simulation. The numerical results indicate that the resulting dynamics is richer than in the case of the classical Cournot model. This model and results are then compared to oligopolies with unemployment insurance systems when the additional cost is considered if firms do not use their maximum capacities.

  14. Computational methods for inverse problems in geophysics: inversion of travel time observations

    USGS Publications Warehouse

    Pereyra, V.; Keller, H.B.; Lee, W.H.K.

    1980-01-01

    General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.

  15. Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data

    NASA Astrophysics Data System (ADS)

    Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei

    2017-12-01

    Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality, which has the potential of monitoring the metabolic processes of nanophosphors-based drugs in vivo. Single-view data reconstruction as a key issue of CB-XLCT imaging promotes the effective study of dynamic XLCT imaging. However, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy is adopted to relieve the ill-posedness of reconstruction. The strategy is based on the third-order simplified spherical harmonic approximation model. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading the imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both location and quantitative recovering with computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.

  16. Kinetic barriers in the isomerization of substituted ureas: implications for computer-aided drug design.

    PubMed

    Loeffler, Johannes R; Ehmki, Emanuel S R; Fuchs, Julian E; Liedl, Klaus R

    2016-05-01

    Urea derivatives are ubiquitously found in many chemical disciplines. N,N'-substituted ureas may show different conformational preferences depending on their substitution pattern. The high energy barrier for isomerization between the cis and trans states poses additional challenges on computational simulation techniques aiming at a reproduction of the biological properties of urea derivatives. Herein, we investigate the energetics of urea conformations and their interconversion using a broad spectrum of methodologies ranging from data mining, via quantum chemistry, to molecular dynamics simulation and free energy calculations. We find that the inversion of urea conformations is inherently slow and beyond the time scale of typical simulation protocols. Therefore, extra care needs to be taken by computational chemists to work with appropriate model systems. We find that both knowledge-driven approaches as well as physics-based methods may guide molecular modelers towards accurate starting structures for expensive calculations to ensure that conformations of urea derivatives are modeled as adequately as possible.

  17. Inverse dynamics of a 3 degree of freedom spatial flexible manipulator

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Serna, M.

    1989-01-01

    A technique is presented for solving the inverse dynamics and kinematics of a 3-degree-of-freedom spatial flexible manipulator. The proposed method finds the joint torques necessary to produce a specified end effector motion. Since the inverse dynamic problem in elastic manipulators is closely coupled to the inverse kinematic problem, the solution of the first also renders the displacements and rotations at any point of the manipulator, including the joints. Furthermore, the formulation is complete in the sense that it includes all the nonlinear terms due to the large rotation of the links. The Timoshenko beam theory is used to model the elastic characteristics, and the resulting equations of motion are discretized using the finite element method. An iterative solution scheme is proposed that relies on local linearization of the problem. The solution of each linearization is carried out in the frequency domain. The performance and capabilities of this technique are tested through simulation analysis. Results show the potential use of this method for the smooth motion control of space telerobots.

  18. Flight instrument and telemetry response and its inversion

    NASA Technical Reports Server (NTRS)

    Weinberger, M. R.

    1971-01-01

    Mathematical models of rate gyros, servo accelerometers, pressure transducers, and telemetry systems were derived and their parameters were obtained from laboratory tests. Analog computer simulations were used extensively for verification of the validity for fast and large input signals. An optimal inversion method was derived to reconstruct input signals from noisy output signals and a computer program was prepared.

  19. COED Transactions, Vol. IX, No. 2, February 1977. Prism: An Educational Aide to Symbolic Differentiation and Simplification of Algebraic Expressions.

    ERIC Educational Resources Information Center

    Marcovitz, Alan B., Ed.

    A computer program for numeric and symbolic manipulation and the methodology underlying its development are presented. Some features of the program are: an option for implied multiplication; computation of higher-order derivatives; differentiation of 26 different trigonometric, hyperbolic, inverse trigonometric, and inverse hyperbolic functions;…

  20. Three-dimensional inversion for Network-Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Siripunvaraporn, W.; Uyeshima, M.; Egbert, G.

    2004-09-01

    Three-dimensional inversion of Network-Magnetotelluric (MT) data has been implemented. The program is based on a conventional 3-D MT inversion code (Siripunvaraporn et al., 2004), which is a data space variant of the OCCAM approach. In addition to modifications required for computing Network-MT responses and sensitivities, the program makes use of Message Passing Interface (MPI) software, allowing computations for each period to be run on separate CPU nodes. Here, we consider inversion of synthetic data generated from simple models consisting of a 1 Ω-m conductive block buried at varying depths in a 100 Ω-m background. We focus in particular on inversion of long period (320-40,960 seconds) data, because Network-MT data usually have high coherency in these period ranges. Even with only long period data the inversion recovers shallow and deep structures, as long as these are large enough to affect the data significantly. However, resolution of the inversion depends greatly on the geometry of the dipole network, the range of periods used, and the horizontal size of the conductive anomaly.

  1. Control of a high beta maneuvering reentry vehicle using dynamic inversion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Alfred Chapman

    2005-05-01

    The design of flight control systems for high performance maneuvering reentry vehicles presents a significant challenge to the control systems designer. These vehicles typically have a much higher ballistic coefficient than crewed vehicles such as the Space Shuttle or proposed crew return vehicles such as the X-38. Moreover, the missions of high performance vehicles usually require a steeper reentry flight path angle, followed by a pull-out into level flight. These vehicles then must transit the entire atmosphere and robustly perform the maneuvers required for the mission. The vehicles must also be flown with small static margins in order to perform the required maneuvers, which can result in highly nonlinear aerodynamic characteristics that frequently transition from being aerodynamically stable to unstable as angle of attack increases. The control system design technique of dynamic inversion has been applied successfully to both high performance aircraft and low beta reentry vehicles. The objective of this study was to explore the application of this technique to high performance maneuvering reentry vehicles, including the basic derivation of the dynamic inversion technique, followed by the extension of that technique to the use of tabular trim aerodynamic models in the controller. The dynamic inversion equations are developed for high performance vehicles and augmented to allow the selection of a desired response for the control system. A six degree of freedom simulation is used to evaluate the performance of the dynamic inversion approach, and results for both nominal and off nominal aerodynamic characteristics are presented.

  2. On the Duality of Forward and Inverse Light Transport.

    PubMed

    Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi

    2011-10-01

    Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.

  3. Performance of some numerical Laplace inversion methods on American put option formula

    NASA Astrophysics Data System (ADS)

    Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.

    2018-03-01

Numerical inversion approaches for the Laplace transform are used to obtain a semianalytic solution. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to calculate American put options through the optimal exercise price in the Laplace space. The comparison of the methods on some simple functions aims to assess the accuracy and the parameters used in the calculation of American put options. The result obtained is the performance of each method regarding accuracy and computational speed. The Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
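The three methods compared in the paper are not reproduced here, but the flavor of numerical Laplace inversion can be illustrated with the closely related Gaver-Stehfest algorithm; a minimal sketch, checked against the known transform pair F(s) = 1/(s+1) and f(t) = e^(-t):

```python
from math import factorial, log, exp

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j) /
                  (factorial(N // 2 - j) * factorial(j) * factorial(j - 1) *
                   factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def laplace_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    ln2 = log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# verify on a smooth test function with a known inverse
approx = laplace_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Gaver-Stehfest evaluates F only on the real axis, which makes it simple but sensitive to oscillatory f(t); the Durbin-Crump method, by contrast, samples the transform along a contour in the complex plane.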

  4. Application of hybrid methodology to rotors in steady and maneuvering flight

    NASA Astrophysics Data System (ADS)

    Rajmohan, Nischint

    Helicopters are versatile flying machines that have capabilities that are unparalleled by fixed wing aircraft, such as operating in hover, performing vertical takeoff and landing on unprepared sites. This makes their use especially desirable in military and search-and-rescue operations. However, modern helicopters still suffer from high levels of noise and vibration caused by the physical phenomena occurring in the vicinity of the rotor blades. Therefore, improvement in rotorcraft design to reduce the noise and vibration levels requires understanding of the underlying physical phenomena, and accurate prediction capabilities of the resulting rotorcraft aeromechanics. The goal of this research is to study the aeromechanics of rotors in steady and maneuvering flight using hybrid Computational Fluid Dynamics (CFD) methodology. The hybrid CFD methodology uses the Navier-Stokes equations to solve the flow near the blade surface but the effect of the far wake is computed through the wake model. The hybrid CFD methodology is computationally efficient and its wake modeling approach is nondissipative making it an attractive tool to study rotorcraft aeromechanics. Several enhancements were made to the CFD methodology and it was coupled to a Computational Structural Dynamics (CSD) methodology to perform a trimmed aeroelastic analysis of a rotor in forward flight. The coupling analyses, both loose and tight were used to identify the key physical phenomena that affect rotors in different steady flight regimes. The modeling enhancements improved the airloads predictions for a variety of flight conditions. It was found that the tightly coupled method did not impact the loads significantly for steady flight conditions compared to the loosely coupled method. The coupling methodology was extended to maneuvering flight analysis by enhancing the computational and structural models to handle non-periodic flight conditions and vehicle motions in time accurate mode. 
The flight test control angles were employed to enable the maneuvering flight analysis. The fully coupled model predicted the presence of three dynamic stall cycles on the rotor in maneuver. It is important to mention that analysis of maneuvering flight requires knowledge of the pilot input control pitch settings and the vehicle states. As a result, these computational tools cannot be used for analysis of loads in a maneuver that has not been duplicated in a real flight. This is a significant limitation if these tools are to be used during the design phase of a helicopter, where its handling qualities are evaluated in different trajectories. Therefore, a methodology was developed to couple the CFD/CSD simulation with an inverse flight mechanics simulation to perform the maneuver analysis without using the flight test control input. The methodology showed reasonable convergence in the steady flight regime, and control angle predictions compared fairly well with test data. In the maneuvering flight regions, the convergence was slower due to relaxation techniques used for numerical stability. The subsequently computed control angles for the maneuvering flight regions compared well with test data. Further, the enhancement of the rotor inflow computations in the inverse simulation through implementation of a Lagrangian wake model improved the convergence of the coupling methodology.

  5. Integrated Approach to the Dynamics and Control of Maneuvering Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R. (Technical Monitor); Meirovitch, Leonard; Tuzcu, Ilhan

    2003-01-01

    This work uses a fundamental approach to the problem of simulating the flight of flexible aircraft. To this end, it integrates into a single formulation the pertinent disciplines, namely, analytical dynamics, structural dynamics, aerodynamics, and controls. It considers both the rigid body motions of the aircraft, three translations (forward motion, sideslip and plunge) and three rotations (roll, pitch and yaw), and the elastic deformations of every point of the aircraft, as well as the aerodynamic, propulsion, gravity and control forces. The equations of motion are expressed in a form ideally suited for computer processing. A perturbation approach yields a flight dynamics problem for the motions of a quasi-rigid aircraft and an 'extended aeroelasticity' problem for the elastic deformations and perturbations in the rigid body motions, with the solution of the first problem entering as an input into the second problem. The control forces for the flight dynamics problem are obtained by an 'inverse' process and the feedback controls for the extended aeroservoelasticity problem are determined by the LQG theory. A numerical example presents time simulations of rigid body perturbations and elastic deformations about 1) a steady level flight and 2) a level steady turn maneuver.

  6. Laser Powered Launch Vehicle Performance Analyses

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Wang, Ten-See (Technical Monitor)

    2001-01-01

The purpose of this study is to establish the technical ground for modeling the physics of the laser powered pulse detonation phenomenon. Laser powered propulsion systems involve complex fluid dynamics, thermodynamics and radiative transfer processes. Successful predictions of the performance of laser powered launch vehicle concepts depend on sophisticated models that reflect the underlying flow physics, including laser ray tracing and focusing, inverse Bremsstrahlung (IB) effects, finite-rate air chemistry, thermal non-equilibrium, plasma radiation and detonation wave propagation, etc. The proposed work will extend the base-line numerical model to an efficient design analysis tool. The proposed model is suitable for 3-D analysis using parallel computing methods.

  7. Conceptual design and analysis of a large antenna utilizing electrostatic membrane management

    NASA Technical Reports Server (NTRS)

    Brooks, A. L.; Coyner, J. V.; Gardner, W. J.; Mihora, D. J.

    1982-01-01

Conceptual designs and associated technologies for the deployment of 100 m class radiometer antennas were developed. An electrostatically suspended and controlled membrane mirror and the supporting structure are discussed. The integrated spacecraft, including STS cargo bay stowage and development, was analyzed. An antenna performance evaluation was performed as a measure of the quality of the membrane/spacecraft when used as a radiometer in the 1 GHz to 5 GHz region. Several related LSS structural dynamic models differing in their stiffness properties (and therefore lowest modal frequencies) are reported. Control systems whose complexity varies inversely with increasing modal frequency regimes are also reported. Interactive computer-aided-design software is discussed.

  8. Kinematic control of robot with degenerate wrist

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Moore, M. C.

    1984-01-01

    Kinematic resolved rate equations allow an operator with visual feedback to dynamically control a robot hand. When the robot wrist is degenerate, the computed joint angle rates exceed operational limits, and unwanted hand movements can result. The generalized matrix inverse solution can also produce unwanted responses. A method is introduced to control the robot hand in the region of the degenerate robot wrist. The method uses a coordinated movement of the first and third joints of the robot wrist to locate the second wrist joint axis for movement of the robot hand in the commanded direction. The method does not entail infinite joint angle rates.

  9. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

This paper presents a general method for calculating the inverse kinematics with singularity and joint limit robustness for both redundant and non-redundant serial-link manipulators. A damped least squares inverse of the Jacobian with dynamic weighting matrices is used to approximate the solution, selectively reducing specific joint differential vectors. The algorithm gives an exact solution away from the singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure was implemented for a six d.o.f. teleoperator, and a well-behaved slave manipulator resulted under teleoperational control.
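A minimal sketch of the damped least squares step with an optional joint weighting matrix; the two-link planar Jacobian is a hypothetical example, not the paper's six d.o.f. teleoperator, and the damping factor and weights are illustrative assumptions:

```python
import numpy as np

def dls_step(J, dx, lam=0.05, W=None):
    """Damped least squares differential IK:
    dq = W^-1 J^T (J W^-1 J^T + lam^2 I)^-1 dx.
    The damping lam bounds joint rates near singularities; W penalizes
    selected joints (identity if omitted)."""
    m, n = J.shape
    W_inv = np.eye(n) if W is None else np.linalg.inv(W)
    JWJt = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JWJt + lam**2 * np.eye(m), dx)

def jacobian(q1, q2):
    """Jacobian of a two-link planar arm with unit link lengths."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

dq = dls_step(jacobian(0.3, 0.6), np.array([0.01, 0.0]))          # regular pose
dq_singular = dls_step(jacobian(0.3, 0.0), np.array([0.01, 0.0]))  # arm fully extended
```

At the outstretched (singular) configuration an undamped pseudoinverse would produce unbounded joint rates; the damping term keeps `dq_singular` finite at the cost of a small tracking error, which is the trade-off the paper's dynamic weighting manages.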

  10. Investigation into the Effects of Weapon Setback on Various Materials and Geometries.

    DTIC Science & Technology

    1978-07-01

taking the Laplace transform of the dynamic equation, rearranging, and taking the inverse transform to find the time-dependent strain. Neglecting the residual strain on the system, expanding the transformed equation in partial fractions, and performing the inverse transform yields the time-dependent strain response.

  11. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting-edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.
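The binary high/low permeability formulation maps naturally onto a quadratic objective over 0/1 variables (a QUBO, the form a D-Wave annealer minimizes). A toy sketch with a hypothetical linear forward model and exhaustive enumeration standing in for the annealer; the matrix `A` and observations are made up for illustration:

```python
import itertools

# Hypothetical tiny inverse problem: assign high (1) or low (0) permeability
# to 4 aquifer cells so that predicted heads match observations. Because the
# unknowns are binary (k_j**2 == k_j), the squared misfit expands into a QUBO,
# the energy function a quantum annealer would sample; here we enumerate.
A = [[0.6, 0.1, 0.1, 0.2],     # linear forward model: head_i = sum_j A[i][j] k_j
     [0.1, 0.7, 0.1, 0.1],
     [0.2, 0.1, 0.6, 0.1]]
obs = [0.7, 0.2, 0.8]          # synthetic observations generated by k = (1, 0, 1, 0)

def misfit(k):
    """Sum of squared differences between modeled and observed heads."""
    return sum((sum(a * kj for a, kj in zip(row, k)) - o) ** 2
               for row, o in zip(A, obs))

best = min(itertools.product([0, 1], repeat=4), key=misfit)
```

Enumeration is feasible only for a handful of cells; the point of the annealer in the study is to sample low-energy states of the same kind of objective at scales where brute force fails.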

  12. Inverse problem of HIV cell dynamics using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    González, J. A.; Guzmán, F. S.

    2017-01-01

In order to describe the dynamics of T-cells in a patient infected with HIV, we use a flavour of Perelson's model. This is a non-linear system of Ordinary Differential Equations that describes the evolution of the concentrations of healthy, latently infected, and infected T-cells and of the free viral particles. Different parameters in the equations give different dynamics. Assuming the concentrations of these cell types are known for a particular patient, the inverse problem consists in estimating the parameters of the model. We solve this inverse problem using a Genetic Algorithm (GA) that minimizes the error between the solutions of the model and the data from the patient. These errors depend on the parameters of the GA, such as the mutation rate and population size, although a detailed analysis of this dependence will be described elsewhere.
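The GA-based parameter estimation can be sketched with a toy closed-form model in place of Perelson's ODE system; the operator choices below (truncation selection, averaging crossover, Gaussian mutation) are illustrative assumptions, not the authors' configuration:

```python
import math
import random

random.seed(1)

# synthetic "patient data" from a known decay model y = a * exp(-b t);
# the GA recovers (a, b) by minimizing the squared error to the data
TRUE_A, TRUE_B = 2.0, 0.5
ts = [0.5 * i for i in range(20)]
data = [TRUE_A * math.exp(-TRUE_B * t) for t in ts]

def error(p):
    """Squared misfit between model output and patient data."""
    a, b = p
    return sum((a * math.exp(-b * t) - d) ** 2 for t, d in zip(ts, data))

def evolve(pop_size=60, gens=120, mut_rate=0.3):
    pop = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=error)
        elite = pop[:pop_size // 2]                   # truncation selection
        children = []
        for _ in range(pop_size - len(elite)):
            (a1, b1), (a2, b2) = random.sample(elite, 2)
            a, b = (a1 + a2) / 2, (b1 + b2) / 2       # averaging crossover
            if random.random() < mut_rate:            # Gaussian mutation
                a += random.gauss(0, 0.1)
                b += random.gauss(0, 0.05)
            children.append((a, b))
        pop = elite + children
    return min(pop, key=error)

a_hat, b_hat = evolve()
```

In the paper the `error` call would instead integrate the ODE system for each candidate parameter set, which is why GA population size and mutation rate directly drive the computational cost.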

  13. Neural computations underlying inverse reinforcement learning in the human brain

    PubMed Central

    Pauli, Wolfgang M; Bossaerts, Peter; O'Doherty, John

    2017-01-01

    In inverse reinforcement learning an observer infers the reward distribution available for actions in the environment solely through observing the actions implemented by another agent. To address whether this computational process is implemented in the human brain, participants underwent fMRI while learning about slot machines yielding hidden preferred and non-preferred food outcomes with varying probabilities, through observing the repeated slot choices of agents with similar and dissimilar food preferences. Using formal model comparison, we found that participants implemented inverse RL as opposed to a simple imitation strategy, in which the actions of the other agent are copied instead of inferring the underlying reward structure of the decision problem. Our computational fMRI analysis revealed that anterior dorsomedial prefrontal cortex encoded inferences about action-values within the value space of the agent as opposed to that of the observer, demonstrating that inverse RL is an abstract cognitive process divorceable from the values and concerns of the observer him/herself. PMID:29083301

  14. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy-current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and avoid the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fit on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, we propagate these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficiently evaluating the impact due to the lack of knowledge of some parameters employed to describe the inspection scenarios, a situation commonly encountered in the industrial NDE context.

  15. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme on the mapping of these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirement, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to be implemented on a single-instruction-stream multiple-data-stream (SIMD) computer with reconfigurable interconnection network. The model of a reconfigurable dual network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  16. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

Numerical modelling requires calibration to predict future stages. A standard method for calibration is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm, which combines Dynamically Dimensioned Search (DDS) and Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance is evaluated using observed and simulated soil moisture at two soil layers in Southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors. However, the VGM parameters were similar to those from previous studies. Both methods are equally computationally efficient. We claim that a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and can be a potential tool for calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
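The two goodness-of-fit metrics used above have standard definitions; a minimal sketch with made-up soil-moisture values:

```python
def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    n = len(obs)
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / n) ** 0.5

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [0.21, 0.25, 0.30, 0.28, 0.24]   # hypothetical observed soil moisture
sim = [0.22, 0.24, 0.29, 0.29, 0.23]   # hypothetical simulated values
```

NSE normalizes the misfit by the variance of the observations, which is why it complements RMSE: a small RMSE on a nearly constant series can still yield a poor NSE.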

  17. Bispectral Inversion: The Construction of a Time Series from Its Bispectrum

    DTIC Science & Technology

    1988-04-13

take the inverse transform. Since the goal is to compute a time series given its bispectrum, it would also be convenient to stay entirely in the frequency domain and go directly from the bispectrum to the Fourier transform of the time series without the need to inverse transform continuous functions. The approximations arise from representing the bicovariance, which is the inverse transform of a continuous function, by the inverse discrete transform.

  18. Interplay of Nitrogen-Atom Inversion and Conformational Inversion in Enantiomerization of 1H-1-Benzazepines.

    PubMed

    Ramig, Keith; Subramaniam, Gopal; Karimi, Sasan; Szalda, David J; Ko, Allen; Lam, Aaron; Li, Jeffrey; Coaderaj, Ani; Cavdar, Leyla; Bogdan, Lukasz; Kwon, Kitae; Greer, Edyta M

    2016-04-15

    A series of 2,4-disubstituted 1H-1-benzazepines, 2a-d, 4, and 6, were studied, varying both the substituents at C2 and C4 and at the nitrogen atom. The conformational inversion (ring-flip) and nitrogen-atom inversion (N-inversion) energetics were studied by variable-temperature NMR spectroscopy and computations. The steric bulk of the nitrogen-atom substituent was found to affect both the conformation of the azepine ring and the geometry around the nitrogen atom. Also affected were the Gibbs free energy barriers for the ring-flip and the N-inversion. When the nitrogen-atom substituent was alkyl, as in 2a-c, the geometry of the nitrogen atom was nearly planar and the azepine ring was highly puckered; the result was a relatively high-energy barrier to ring-flip and a low barrier to N-inversion. Conversely, when the nitrogen-atom substituent was a hydrogen atom, as in 2d, 4, and 6, the nitrogen atom was significantly pyramidalized and the azepine ring was less puckered; the result here was a relatively high energy barrier to N-inversion and a low barrier to ring-flip. In these N-unsubstituted compounds, it was found computationally that the lowest-energy stereodynamic process was ring-flip coupled with N-inversion, as N-inversion alone had a much higher energy barrier.

  19. Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card

    NASA Astrophysics Data System (ADS)

    Jiang, Jinpeng; Zhu, Peimin

    2018-05-01

Full waveform inversion (FWI) is a challenging procedure due to the high computational cost of the modeling, especially for the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement a GPU-based 2D elastic FWI (EFWI) in the time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the limitation of the relatively small global memory on the GPU, the boundary saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion accelerates the convergence of the misfit function. A multiscale inversion strategy is performed in the workflow to obtain accurate inversion results. In our tests, the GPU-based implementations using a single GPU device achieve >15 times speedup in forward modeling, and about 12 times speedup in gradient calculation, compared with eight-core CPU implementations optimized by OpenMP. The results from the GPU implementations are verified to have sufficient accuracy by comparison with the results obtained from the CPU implementations.

  20. Iterative computation of generalized inverses, with an application to CMG steering laws

    NASA Technical Reports Server (NTRS)

    Steincamp, J. W.

    1971-01-01

A cubically convergent iterative method for computing the generalized inverse of an arbitrary M × N matrix A is developed, and a FORTRAN subroutine implementing the method for real matrices on a CDC 3200 is given, with a numerical example to illustrate accuracy. Application to a redundant single-gimbal CMG assembly steering law is discussed.
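The report's exact iteration is not reproduced here, but a standard cubically convergent scheme for the Moore-Penrose inverse is the third-order hyperpower iteration, sketched below in NumPy rather than FORTRAN; the test matrix is an arbitrary example:

```python
import numpy as np

def pinv_hyperpower3(A, iters=40):
    """Third-order (cubically convergent) hyperpower iteration for the
    Moore-Penrose inverse: X <- X (3I - A X (3I - A X)).
    Writing E = I - A X, one step maps E to E^3, hence cubic convergence.
    The starting guess X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence."""
    m, n = A.shape
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(m)
    for _ in range(iters):
        AX = A @ X
        X = X @ (3 * I - AX @ (3 * I - AX))
    return X

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])    # arbitrary 2 x 3 full-row-rank example
X = pinv_hyperpower3(A)
```

Each step costs a few matrix multiplications but no factorization, which is the appeal for steering-law applications where the generalized inverse must be refreshed repeatedly.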

  1. Factors Affecting Energy Barriers for Pyramidal Inversion in Amines and Phosphines: A Computational Chemistry Lab Exercise

    ERIC Educational Resources Information Center

    Montgomery, Craig D.

    2013-01-01

    An undergraduate exercise in computational chemistry that investigates the energy barrier for pyramidal inversion of amines and phosphines is presented. Semiempirical calculations (PM3) of the ground-state and transition-state energies for NR[superscript 1]R[superscript 2]R[superscript 3] and PR[superscript 1]R[superscript 2]R[superscript 3] allow…

  2. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation.

    PubMed

    Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S

    2011-12-01

Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with the number of genotyped animals up to 30 000. Matrix multiplication in the computation of the genomic relationship was by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be either by modifications of existing code or by the use of efficient automatic optimizations provided by open source or third-party libraries. © 2011 Blackwell Verlag GmbH.
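A common construction for the genomic relationship matrix is VanRaden's method 1 (center the allele-count matrix by twice the allele frequencies, then scale the cross-product); a sketch under that assumption, with simulated genotypes rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical genotype matrix: rows = animals, columns = SNP markers,
# entries = counts of the reference allele in {0, 1, 2}
M = rng.integers(0, 3, size=(6, 200)).astype(float)

p = M.mean(axis=0) / 2.0                         # observed allele frequencies
Z = M - 2.0 * p                                  # center each marker column
G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))    # genomic relationship matrix

# G built from observed frequencies is singular (its rows sum to zero),
# so a small ridge is added here before inverting for the unified equations
G_inv = np.linalg.inv(G + 0.01 * np.eye(len(G)))
```

The `Z @ Z.T` product is the step the paper accelerates with optimized loops and a matrix multiplication subroutine; for 30 000 animals and 40 K markers it dominates the cost.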

  3. A theoretical Deduction from the Hubble law based on a Modified Newtonian Dynamics with field of Yukawa inverse

    NASA Astrophysics Data System (ADS)

    Falcon, N.

    2017-07-01

At cosmic scales the dynamics of the Universe are almost exclusively prescribed by the force of gravity; however, the assumption of a law of gravitation depending on the inverse square of the distance leads to the known problems of the rotation curves of galaxies and the missing mass (dark matter). The problem of the coupling of gravity to changes in scale and deviations from the inverse-square law is an old one (Laplace, 1805; Seeliger, 1898), which has motivated alternatives to Newtonian dynamics compatible with observations. The present paper postulates a modified Newtonian dynamics by adding an inverse Yukawa potential: U(r) ≡ U0(M)(r - r0)e^(-α/r) is the potential per unit mass (in N/kg) as a function of the baryonic mass that causes the field, where r0 is of the order of 50 h⁻¹ Mpc and α is a coupling constant of the order of 2.5 h⁻¹ Mpc. This potential is zero within the solar system, slightly attractive at interstellar distances, very attractive in the galactic range, and repulsive at cosmic scales. Its origin is baryonic matter; it allows the inclusion of Milgrom's MOND theory to explain the rotation curves, is compatible with Eötvös-type experiments, and allows the Hubble law to be deduced at cosmic scales in the form H0 = 100 h km/s/Mpc ≍ U0(M)/c, where U0(M) ≍ 4π × 6.67 × 10⁻¹¹ m/s² is obtained from Laplace's equation, assuming that the gravitational force is the inverse-square law plus a nonlinear inverse-Yukawa term. It is concluded that modifying the law of gravity with nonlinear terms allows the dynamics of the Universe to be modelled on large scales and includes non-locality without dark matter. (See Falcon et al. 2014, International Journal of Astronomy and Astrophysics, 4, 551-559).

  4. Third International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-3)

    NASA Technical Reports Server (NTRS)

    Dulikravich, George S. (Editor)

    1991-01-01

    Papers from the Third International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES) are presented. The papers discuss current research in the general field of inverse, semi-inverse, and direct design and optimization in engineering sciences. The rapid growth of this relatively new field is due to the availability of faster and larger computing machines.

  5. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique at no additional cost, avoiding extra field work and indoor processing, and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. The comparison in Figure 1 shows that the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the type of data (impedance/tipper/impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for terrain inversion, and is therefore very useful for the study of the continental shelf with continuous exploration of land, marine and underground data. The three-dimensional electrical model of the ore zone reflects the basic information on strata, rock and structure. Although it cannot indicate the ore body position directly, important clues are provided for prospecting work by the delineation of the diorite pluton uplift range.
The test results show that high-quality data processing and an efficient inversion method for the electromagnetic method are an important guarantee for porphyry ore exploration.

  6. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-velocity and density are very important parameters for distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving inversion results when estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high computational efficiency under parallel computing, although it often suffers from a lack of clarity in the results. This paper describes a two-stage method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, the joint inversion is applicable to wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information further enhances the stability and accuracy of parameter estimation, especially for S-velocity and density; (3) the two-stage procedure achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel, so it has high computational efficiency. On the other hand, to deal with the blurring of, and undesired disturbances to, the inversion results obtained from the first stage, we apply the second stage: total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to the first stage because it is solved with fast split Bregman iterations.
Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating density as well as P- and S-wave velocities, even when the seismic data are noisy, with a signal-to-noise ratio of 5.
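    The second-stage idea can be illustrated with a 1D total-variation denoiser solved by split Bregman iterations. This is a minimal sketch; the step signal, noise level, and the weights mu and lam are hypothetical stand-ins for the paper's spatially and temporally constrained regularization:

```python
import numpy as np

def shrink(x, t):
    # soft-thresholding: closed-form minimizer of the l1 subproblem
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise(f, mu=10.0, lam=1.0, iters=100):
    # minimize ||D u||_1 + (mu/2)||u - f||^2 via split Bregman
    n = f.size
    D = np.diff(np.eye(n), axis=0)           # forward-difference operator
    A = mu * np.eye(n) + lam * (D.T @ D)     # normal matrix for the u-update
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)        # gradient-variable update
        b = b + Du - d                       # Bregman-variable update
    return u

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(40), np.ones(40)])
noisy = clean + 0.1 * rng.normal(size=80)
denoised = tv_denoise(noisy)
```

The piecewise-constant step survives while the small noise-induced gradients are thresholded away, which is the deblurring effect the abstract describes.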

  7. Application of Nonlinear Systems Inverses to Automatic Flight Control Design: System Concepts and Flight Evaluations

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Cicolani, L.

    1981-01-01

    A practical method for the design of automatic flight control systems for aircraft with complex characteristics and operational requirements, such as powered-lift STOL and V/STOL configurations, is presented. The method is effective for a large class of dynamic systems requiring multi-axis control which have highly coupled nonlinearities, redundant controls, and complex multidimensional operational envelopes. It exploits the concept of inverse dynamic systems, and an algorithm for the construction of inverses is given. A hierarchical structure for the total control logic with inverses is presented. The method is illustrated with an application to the Augmentor Wing Jet STOL Research Aircraft equipped with a digital flight control system. Results of a flight evaluation of the control concept on this aircraft are presented.
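    The core of the inverse-dynamics idea can be sketched for a toy single-axis plant: the control law cancels the known nonlinearity so the closed loop follows chosen linear dynamics. The pendulum-like plant and gains below are hypothetical, not the aircraft model:

```python
import numpy as np

def f(x):
    return -np.sin(x[0])                  # plant nonlinearity

k1, k2 = 4.0, 4.0                          # desired dynamics: x'' + 4x' + 4x = 0
x = np.array([1.0, 0.0])                   # start 1 rad from trim
dt = 1e-3
for _ in range(10_000):                    # simulate 10 s with Euler steps
    v = -k1 * x[0] - k2 * x[1]             # commanded linear acceleration
    u = -f(x) + v                          # inverse (nonlinearity-cancelling) law
    x = x + dt * np.array([x[1], f(x) + u])
```

Because u cancels f exactly, the closed loop is the chosen critically damped linear system and the state decays to the origin.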

  8. The experimental identification of magnetorheological dampers and evaluation of their controllers

    NASA Astrophysics Data System (ADS)

    Metered, H.; Bonello, P.; Oyadiji, S. O.

    2010-05-01

    Magnetorheological (MR) fluid dampers are semi-active control devices that have been applied over a wide range of practical vibration control applications. This paper concerns the experimental identification of the dynamic behaviour of an MR damper and the use of the identified parameters in the control of such a damper. Feed-forward and recurrent neural networks are used to model both the direct and inverse dynamics of the damper. Training and validation of the proposed neural networks are achieved by using the data generated through dynamic tests with the damper mounted on a tensile testing machine. The validation test results clearly show that the proposed neural networks can reliably represent both the direct and inverse dynamic behaviours of an MR damper. The effect of the cylinder's surface temperature on both the direct and inverse dynamics of the damper is studied, and the neural network model is shown to be reasonably robust against significant temperature variation. The inverse recurrent neural network model is introduced as a damper controller and experimentally evaluated against alternative controllers proposed in the literature. The results reveal that the neural-based damper controller offers superior damper control. This observation and the added advantages of low-power requirement, extended service life of the damper and the minimal use of sensors, indicate that a neural-based damper controller potentially offers the most cost-effective vibration control solution among the controllers investigated.

  9. Towards reversible basic linear algebra subprograms: A performance study

    DOE PAGES

    Perumalla, Kalyan S.; Yoginath, Srikanth B.

    2014-12-06

    Problems such as fault tolerance and scalable synchronization can be efficiently solved using reversibility of applications. Making applications reversible by relying on computation rather than on memory is ideal for large-scale parallel computing, especially for the next generation of supercomputers in which memory is expensive in terms of latency, energy, and price. In this direction, a case study is presented here in reversing a computational core, namely the Basic Linear Algebra Subprograms (BLAS), which are widely used in scientific applications. A new Reversible BLAS (RBLAS) library interface has been designed, and a prototype has been implemented with two modes: (1) a memory-mode in which reversibility is obtained by checkpointing to memory in forward and restoring from memory in reverse, and (2) a computational-mode in which nothing is saved in the forward direction and restoration is done entirely via inverse computation in reverse. The article focuses on detailed performance benchmarking to evaluate the runtime dynamics and performance effects, comparing reversible computation with checkpointing on both traditional CPU platforms and recent GPU accelerator platforms. For BLAS Level-1 subprograms, the data indicate over an order of magnitude better speed for reversible computation compared to checkpointing. For BLAS Level-2 and Level-3, a more complex tradeoff is observed between reversible computation and checkpointing, depending on the computational and memory complexities of the subprograms.
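    The two reversal modes can be sketched for a Level-1 operation (scaling a vector, as in BLAS *SCAL). The function names are illustrative, not the actual RBLAS interface:

```python
import numpy as np

def scal_forward_checkpoint(a, x, tape):
    tape.append(x.copy())        # memory mode: checkpoint the input
    return a * x

def scal_reverse_checkpoint(tape):
    return tape.pop()            # restore from memory

def scal_forward_reversible(a, x):
    return a * x                 # computational mode: nothing saved

def scal_reverse_reversible(a, y):
    return y / a                 # undo purely by inverse computation (a != 0)

x = np.arange(1.0, 5.0)
tape = []
y_mem = scal_forward_checkpoint(3.0, x, tape)
x_restored_mem = scal_reverse_checkpoint(tape)
y_cmp = scal_forward_reversible(3.0, x)
x_restored_cmp = scal_reverse_reversible(3.0, y_cmp)
```

The memory mode pays bandwidth and storage for the tape; the computational mode pays a division per element in reverse, which is the tradeoff the benchmarks quantify.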

  10. Efficient stabilization and acceleration of numerical simulation of fluid flows by residual recombination

    NASA Astrophysics Data System (ADS)

    Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.

    2017-09-01

    The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, lead to computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, Boostconv, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. The algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical GMRES method. It is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and the boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number.
We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.
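    The residual-recombination idea can be sketched with an Anderson-type accelerator of a fixed-point iteration: recent iterates and residuals are stored and recombined to minimize the residual norm at each step. The linear test map is hypothetical, and the scheme below keeps the full history rather than the paper's fixed-size basis:

```python
import numpy as np

def accelerate(g, x0, iters=40):
    # store iterates X and residuals R of the fixed-point map g, and at each
    # step pick the residual-minimizing recombination (Anderson-type update)
    X = [x0]
    R = [g(x0) - x0]
    for _ in range(iters):
        if len(X) == 1:
            x = X[0] + R[0]                       # plain relaxation step
        else:
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            dR = np.column_stack([R[i + 1] - R[i] for i in range(len(R) - 1)])
            gam = np.linalg.lstsq(dR, R[-1], rcond=None)[0]
            x = X[-1] + R[-1] - (dX + dR) @ gam   # recombined iterate
        X.append(x)
        R.append(g(x) - x)
    return X[-1], np.linalg.norm(R[-1])

# hypothetical linear fixed-point map with spectral radius 0.95
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 30))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))
b = rng.normal(size=30)
g = lambda x: A @ x + b

x_acc, resid = accelerate(g, np.zeros(30))
```

Like the method in the abstract, the accelerator only consumes the residuals produced by the existing iteration, so it can wrap a black-box relaxation procedure.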

  11. A Reconfiguration Scheme for Accommodating Actuator Failures in Multi-Input, Multi-Output Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Siwakosit, W.; Hess, R. A.; Bacon, Bart (Technical Monitor); Burken, John (Technical Monitor)

    2000-01-01

    A multi-input, multi-output reconfigurable flight control system design utilizing a robust controller and an adaptive filter is presented. The robust control design consists of a reduced-order, linear dynamic inversion controller with an outer-loop compensation matrix derived from Quantitative Feedback Theory (QFT). A principal feature of the scheme is placement of the adaptive filter in series with the QFT compensator, thus exploiting the inherent robustness of the nominal flight control system in the presence of plant uncertainties. An example of the scheme is presented in a pilot-in-the-loop computer simulation using a simplified model of the lateral-directional dynamics of the NASA F-18 High Angle of Attack Research Vehicle (HARV) that included nonlinear anti-windup logic and actuator limitations. Prediction of handling qualities and pilot-induced oscillation tendencies in the presence of these nonlinearities is included in the example.

  12. Application of the concept of dynamic trim control to automatic landing of carrier aircraft [utilizing digital feedforward control]

    NASA Technical Reports Server (NTRS)

    Smith, G. A.; Meyer, G.

    1980-01-01

    The results of a simulation study of an alternative design concept for an automatic landing control system, the total aircraft flight control system (TAFCOS), are presented. TAFCOS is an open-loop, feedforward system that commands the proper instantaneous thrust, angle of attack, and roll angle to achieve the forces required to follow the desired trajectory. These dynamic trim conditions are determined by an inversion of the aircraft's nonlinear force characteristics. The concept was applied to an A-7E aircraft approaching an aircraft carrier. The implementation details with an airborne digital computer are discussed, and the automatic carrier landing situation is described. Simulation results are presented for a carrier approach with atmospheric disturbances, an approach with no disturbances, and for tailwind and headwind gusts.

  13. Controlled decoherence in a quantum Lévy kicked rotator

    NASA Astrophysics Data System (ADS)

    Schomerus, Henning; Lutz, Eric

    2008-06-01

    We develop a theory describing the dynamics of quantum kicked rotators (modeling cold atoms in a pulsed optical field) which are subjected to combined amplitude and timing noise generated by a renewal process (acting as an engineered reservoir). For waiting-time distributions of variable exponent (Lévy noise), we demonstrate the existence of a regime of nonexponential loss of phase coherence. In this regime, the momentum dynamics is subdiffusive, which also manifests itself in a non-Gaussian limiting distribution and a fractional power-law decay of the inverse participation ratio. The purity initially decays with a stretched exponential which is followed by two regimes of power-law decay with different exponents. The averaged logarithm of the fidelity probes the sprinkling distribution of the renewal process. These analytical results are confirmed by numerical computations on quantum kicked rotators subjected to noise events generated by a Yule-Simon distribution.

  14. Adaptive guidance for an aero-assisted boost vehicle

    NASA Astrophysics Data System (ADS)

    Pamadi, Bandu N.; Taylor, Lawrence W., Jr.; Price, Douglas B.

    An adaptive guidance system incorporating a dynamic pressure constraint is studied for a single-stage-to-low-earth-orbit (LEO) aero-assist booster with thrust gimbal angle as the control variable. To derive an adaptive guidance law, cubic spline functions are used to represent the ascent profile. The booster flight to LEO is divided into initial and terminal phases. In the initial phase, the ascent profile is continuously updated to maximize the performance of the boost vehicle en route. A linear feedback control is used in the terminal phase to guide the aero-assisted booster onto the desired LEO. The computer simulation of the vehicle dynamics considers a rotating spherical earth, an inverse-square (Newtonian) gravity field, and an exponential model for the earth's atmospheric density. This adaptive guidance algorithm is capable of handling large deviations in both atmospheric conditions and modeling uncertainties, while ensuring maximum booster performance.
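    The environment models named in the abstract are compact enough to sketch directly: an inverse-square gravity field and an exponential atmosphere, combined into the dynamic-pressure constraint q = 0.5*rho*v^2. The constants are rough Earth values and the flight condition is hypothetical:

```python
import math

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6          # mean Earth radius, m
RHO0 = 1.225           # sea-level density, kg/m^3
H = 8500.0             # atmospheric scale height, m

def gravity(h):
    return MU / (R_E + h) ** 2       # inverse-square (Newtonian) field

def density(h):
    return RHO0 * math.exp(-h / H)   # exponential atmosphere model

def dynamic_pressure(h, v):
    return 0.5 * density(h) * v ** 2

# hypothetical ascent condition: ~11 km altitude at 470 m/s
q = dynamic_pressure(11_000.0, 470.0)
```

A guidance law of the kind described would reshape the spline ascent profile whenever q approaches its allowed limit.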

  15. High-resolution mapping of bifurcations in nonlinear biochemical circuits

    NASA Astrophysics Data System (ADS)

    Genot, A. J.; Baccouche, A.; Sieskind, R.; Aubert-Kato, N.; Bredeche, N.; Bartolo, J. F.; Taly, V.; Fujii, T.; Rondelez, Y.

    2016-08-01

    Analog molecular circuits can exploit the nonlinear nature of biochemical reaction networks to compute low-precision outputs with fewer resources than digital circuits. This analog computation is similar to that employed by gene-regulation networks. Although digital systems have a tractable link between structure and function, the nonlinear and continuous nature of analog circuits yields an intricate functional landscape, which makes their design counter-intuitive, their characterization laborious and their analysis delicate. Here, using droplet-based microfluidics, we map with high resolution and dimensionality the bifurcation diagrams of two synthetic, out-of-equilibrium and nonlinear programs: a bistable DNA switch and a predator-prey DNA oscillator. The diagrams delineate where function is optimal, dynamics bifurcates and models fail. Inverse problem solving on these large-scale data sets indicates interference from enzymatic coupling. Additionally, data mining exposes the presence of rare, stochastically bursting oscillators near deterministic bifurcations.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Sun Ung, E-mail: sunung@umich.edu; Monroe, Charles W., E-mail: cwmonroe@umich.edu

    The inverse problem of parameterizing intermolecular potentials given macroscopic transport and thermodynamic data is addressed. Procedures are developed to create arbitrary-precision algorithms for transport collision integrals, using the Lennard-Jones (12-6) potential as an example. Interpolation formulas are produced that compute these collision integrals to four-digit accuracy over the reduced-temperature range 0.3 ≤ T* ≤ 400, allowing very fast computation. Lennard-Jones parameters for neon, argon, and krypton are determined by simultaneously fitting the observed temperature dependences of their viscosities and second virial coefficients, one of the first times that a thermodynamic and a dynamic property have been used simultaneously for Lennard-Jones parameterization. In addition to matching viscosities and second virial coefficients within the bounds of experimental error, the determined Lennard-Jones parameters are also found to predict the thermal conductivity and self-diffusion coefficient accurately, supporting the value of the Lennard-Jones (12-6) potential for noble-gas transport-property correlation.
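    As a flavor of the quadratures involved, the reduced second virial coefficient of the Lennard-Jones (12-6) potential can be evaluated by simple trapezoidal integration; the grid choices below are hypothetical, and the paper's arbitrary-precision collision integrals are computed far more carefully:

```python
import numpy as np

def b2_reduced(t_star, r_max=20.0, n=200_000):
    # B2*(T*) = -3 * integral of (exp(-u*(r)/T*) - 1) r^2 dr, reduced units
    r = np.linspace(1e-3, r_max, n)
    u = 4.0 * (r ** -12 - r ** -6)        # Lennard-Jones (12-6) potential
    y = (np.exp(-u / t_star) - 1.0) * r ** 2
    h = r[1] - r[0]
    return -3.0 * h * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule

b2_cold = b2_reduced(1.0)     # attraction dominates: negative
b2_hot = b2_reduced(10.0)     # repulsion dominates: positive
b2_boyle = b2_reduced(3.418)  # near the Lennard-Jones Boyle temperature
```

The sign change across the Boyle temperature (T* ≈ 3.418) is the kind of behavior the fitted parameters must reproduce against measured virial data.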

  17. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  18. Solving large-scale dynamic systems using band Lanczos method in Rockwell NASTRAN on CRAY X-MP

    NASA Technical Reports Server (NTRS)

    Gupta, V. K.; Zillmer, S. D.; Allison, R. E.

    1986-01-01

    Better models, more accurate and faster algorithms, and large-scale computing improve cost effectiveness and permit more representative dynamic analyses. The band Lanczos eigensolution method was implemented in Rockwell's version of the 1984 COSMIC-released NASTRAN finite element structural analysis computer program to effectively solve for structural vibration modes, including those of large complex systems exceeding 10,000 degrees of freedom. The Lanczos vectors were re-orthogonalized locally using the Lanczos method and globally using the modified Gram-Schmidt method, sweeping out rigid-body modes and previously generated modes and Lanczos vectors. The truncated band matrix was solved for vibration frequencies and mode shapes using Givens rotations. Numerical examples are included to demonstrate the cost effectiveness and accuracy of the method as implemented in Rockwell NASTRAN; the CRAY version is based on RPK's COSMIC/NASTRAN. The band Lanczos method was more reliable and accurate, and converged faster, than the single-vector Lanczos method. It was comparable to the subspace iteration method, a block version of the inverse power method; however, the subspace matrix tended to be fully populated, not as sparse as a band matrix.
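    The Lanczos recurrence with full Gram-Schmidt reorthogonalization, one ingredient of the method above, can be sketched as follows; the small random symmetric matrix is a hypothetical stand-in for a structural eigenproblem:

```python
import numpy as np

def lanczos(A, k, rng):
    # Lanczos tridiagonalization with full Gram-Schmidt reorthogonalization
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        # re-orthogonalize against *all* previous Lanczos vectors
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

rng = np.random.default_rng(3)
B = rng.normal(size=(50, 50))
A = B + B.T                      # hypothetical symmetric test matrix
Q, T = lanczos(A, 50, rng)
ritz = np.linalg.eigvalsh(T)     # Ritz values from the tridiagonal matrix
true = np.linalg.eigvalsh(A)
```

Without reorthogonalization, rounding errors cause spurious duplicate eigenvalues, which is why the NASTRAN implementation re-orthogonalizes locally and globally.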

  19. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters needed to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems, and sequential methods can significantly increase its efficiency. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network in the Over-Land Atmospheric Dispersion (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where model parameters best fitted to the observed data must be found.
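    The ABC rejection step underlying such schemes can be sketched for a toy one-parameter source-location problem. The simplified 1D "dispersion" model, prior, and tolerance are hypothetical; real OLAD sensor data would replace the synthetic observations:

```python
import numpy as np

rng = np.random.default_rng(4)
sensors = np.linspace(-5.0, 5.0, 11)

def simulate(x0):
    # stand-in dispersion model: concentration falls off around the source
    return 1.0 / (1.0 + (sensors - x0) ** 2)

x_true = 1.5
observed = simulate(x_true) + 0.01 * rng.normal(size=sensors.size)

accepted = []
for _ in range(20_000):
    x0 = rng.uniform(-5.0, 5.0)                    # draw from the prior
    if np.linalg.norm(simulate(x0) - observed) < 0.08:
        accepted.append(x0)                        # keep close matches

posterior_mean = float(np.mean(accepted))
```

The accepted samples approximate the posterior over the source location without ever evaluating a likelihood; the sequential variant tightens the tolerance over successive populations.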

  20. Modular Approaches to Earth Science Scientific Computing: 3D Electromagnetic Induction Modeling as an Example

    NASA Astrophysics Data System (ADS)

    Tandon, K.; Egbert, G.; Siripunvaraporn, W.

    2003-12-01

    We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object-oriented programming approach. This approach allows us to modify the individual components of the proposed inversion scheme, and also to reuse the components for a variety of problems in earth science computing, however diverse they might be. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques. Our initial code development is for EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric (MT) inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general, for inversion of MT data one fixes boundary conditions at the edge of the model domain and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in the specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding such a feature is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.

  1. Large Scale Document Inversion using a Multi-threaded Computing System

    PubMed Central

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core, multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU as a massively parallel coprocessor because it consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure for full-text search and document retrieval, a large number of documents requires a tremendous amount of time to index. The performance of document inversion can be improved by a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single-program multiple-data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts •Information systems➝Information retrieval •Computing methodologies➝Massively parallel and high-performance simulations. PMID:29861701
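    The sequential baseline of document inversion is easy to sketch: build a hash map from each term to the posting list of documents containing it. The GPU algorithm in the paper parallelizes exactly this construction; the toy corpus below is hypothetical:

```python
from collections import defaultdict

# hypothetical toy corpus: document id -> text
docs = {
    0: "gpu computing with cuda",
    1: "parallel document inversion on gpu",
    2: "inverted index for document retrieval",
}

index = defaultdict(list)
for doc_id, text in docs.items():
    for term in dict.fromkeys(text.split()):  # dedupe terms, keep order
        index[term].append(doc_id)            # append to the posting list
```

Each term now maps to a sorted posting list (e.g. "gpu" appears in documents 0 and 1), which is the structure queried during full-text search.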

  2. Large Scale Document Inversion using a Multi-threaded Computing System.

    PubMed

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core, multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU as a massively parallel coprocessor because it consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure for full-text search and document retrieval, a large number of documents requires a tremendous amount of time to index. The performance of document inversion can be improved by a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single-program multiple-data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. •Information systems➝Information retrieval •Computing methodologies➝Massively parallel and high-performance simulations.

  3. The Self-Organization of a Spoken Word

    PubMed Central

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participants' distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213

  4. A "Kane's Dynamics" Model for the Active Rack Isolation System Part Two: Nonlinear Model Development, Verification, and Simplification

    NASA Technical Reports Server (NTRS)

    Beech, G. S.; Hampton, R. D.; Rupert, J. K.

    2004-01-01

    Many microgravity space-science experiments require vibratory acceleration levels that are unachievable without active isolation. The Boeing Corporation's active rack isolation system (ARIS) employs a novel combination of magnetic actuation and mechanical linkages to address these isolation requirements on the International Space Station. Effective model-based vibration isolation requires: (1) an isolation device, (2) an adequate dynamic (i.e., mathematical) model of that isolator, and (3) a suitable, corresponding controller. This Technical Memorandum documents the validation of a high-fidelity dynamic model of ARIS. The verification of this dynamics model was achieved by utilizing two commercial off-the-shelf (COTS) software tools: Deneb's ENVISION(registered trademark) and Online Dynamics' Autolev(trademark). ENVISION is a robotics software package developed for the automotive industry that employs three-dimensional computer-aided design models to facilitate both forward and inverse kinematics analyses. Autolev is a DOS-based interpreter designed, in general, to solve vector-based mathematical problems and, specifically, to solve dynamics problems using Kane's method. The simplification of this model was achieved using the small-angle theorem for the joint angle of the ARIS actuators. This simplification greatly reduces the overall complexity of the closed-form solution, yielding one that is easily implemented using COTS control hardware.

  5. Efficient calculation of full waveform time domain inversion for electromagnetic problem using fictitious wave domain method and cascade decimation decomposition

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2016-12-01

    Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion that solves simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations, and the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity. This goal would not be attainable with a pure time domain solution of the inverse problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom is required to represent the observed MT waveforms across a large frequency bandwidth: the smallest time steps must be finer than those required to represent the highest frequency, while the total number of time steps must also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome when solving for a model update. We have implemented a code that addresses this situation through cascade decimation decomposition, which reduces the size of the sensitivity matrix substantially through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computation of the sensitivity matrices dramatically while keeping the solution close to that of the undecimated case.
For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.

  6. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
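    The precision-weighted flavor of such a Bayesian updating scheme can be sketched for a single scalar model parameter; the values are hypothetical, not seismic quantities:

```python
def fuse(mean_a, var_a, mean_b, var_b):
    # precision-weighted (Gaussian) combination of two estimates
    prec = 1.0 / var_a + 1.0 / var_b
    mean = (mean_a / var_a + mean_b / var_b) / prec
    return mean, 1.0 / prec

m, v = 4.0, 1.0               # current model value and its variance
m, v = fuse(m, v, 5.0, 0.25)  # fuse in a refinement with lower uncertainty
# the update moves toward the more certain estimate and reduces the variance
```

Each refinement pulls the model toward its estimate in proportion to its confidence, which is the deterministic simplification of Bayesian updating the abstract describes.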

  7. Elucidating Ligand-Modulated Conformational Landscape of GPCRs Using Cloud-Computing Approaches.

    PubMed

    Shukla, Diwakar; Lawrenz, Morgan; Pande, Vijay S

    2015-01-01

    G-protein-coupled receptors (GPCRs) are a versatile family of membrane-bound signaling proteins. Despite the recent successes in obtaining crystal structures of GPCRs, much needs to be learned about the conformational changes associated with their activation. Furthermore, the mechanism by which ligands modulate the activation of GPCRs has remained elusive. Molecular simulations provide a way of obtaining a detailed atomistic description of GPCR activation dynamics. However, simulating GPCR activation is challenging due to the long timescales involved and the associated challenge of gaining insights from the "Big" simulation datasets. Here, we demonstrate how cloud-computing approaches have been used to tackle these challenges and obtain insights into the activation mechanism of GPCRs. In particular, we review the use of Markov state model (MSM)-based sampling algorithms for sampling milliseconds of dynamics of a major drug target, the G-protein-coupled receptor β2-AR. MSMs of agonist- and inverse-agonist-bound β2-AR reveal multiple activation pathways and how ligands function via modulation of the ensemble of activation pathways. We target this ensemble of conformations with computer-aided drug design approaches, with the goal of designing drugs that interact more closely with diverse receptor states, for overall increased efficacy and specificity. We conclude by discussing how cloud-based approaches present a powerful and broadly available tool for routinely studying complex biological systems. © 2015 Elsevier Inc. All rights reserved.
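As a minimal illustration of the MSM machinery mentioned above, the sketch below estimates a transition matrix from an already-discretized trajectory by counting transitions at a fixed lag and row-normalizing. The toy two-state trajectory is an assumption; production MSM pipelines for systems like β2-AR add featurization, clustering into many states, lag-time selection, and statistical validation:

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Estimate a Markov state model transition matrix from a
    discretized trajectory: count transitions at the given lag time,
    then row-normalize the count matrix."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0          # avoid division by zero for unvisited states
    return counts / rows

# Toy 2-state trajectory: mostly stays in a state, occasionally switches
dtraj = [0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0]
T = estimate_msm(dtraj, n_states=2)
# T[i, j] is the probability of moving from state i to state j per lag;
# each row sums to 1
print(np.round(T, 2))
```

From such a matrix, implied timescales (from the eigenvalues of T) and transition pathways between macrostates are the quantities used to compare, for example, agonist- versus inverse-agonist-bound ensembles.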

  8. Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms

    DTIC Science & Technology

    2004-08-01

    inverse transform process. 2. BACKGROUND The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been...coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the...coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared

  9. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
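Of the three deconvolution approaches listed, inverse filtering with a truncated filter is the simplest to sketch. The 1D toy below (Gaussian kernel, box-shaped source, and threshold value are all assumptions for illustration) divides in the Fourier domain and zeroes frequencies where the kernel response is too small to invert stably; the actual CIMRI step is a 3D deconvolution with the dipole kernel:

```python
import numpy as np

def truncated_inverse_filter(observed, kernel_fft, threshold):
    """Deconvolve by Fourier-domain division, zeroing frequencies where
    the kernel magnitude falls below a threshold (a stable but lossy
    inverse). Illustrates 'inverse filtering with a truncated filter'
    in 1D; CIMRI performs the analogous 3D dipole-kernel deconvolution."""
    mask = np.abs(kernel_fft) > threshold
    safe = np.where(mask, kernel_fft, 1.0)     # placeholder avoids 0-division
    inv = np.where(mask, 1.0 / safe, 0.0)
    return np.real(np.fft.ifft(np.fft.fft(observed) * inv))

n = 128
source = np.zeros(n)
source[40:60] = 1.0                            # toy susceptibility distribution
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
kernel /= kernel.sum()
k_fft = np.fft.fft(np.fft.ifftshift(kernel))   # center kernel, then FFT
field = np.real(np.fft.ifft(np.fft.fft(source) * k_fft))   # forward "field map"
recon = truncated_inverse_filter(field, k_fft, threshold=0.01)
corr = np.corrcoef(source, recon)[0, 1]
print(round(float(corr), 3))
```

The truncation threshold plays the same role as the regularization parameter in the Tikhonov and TV variants: larger values suppress more noise at the cost of losing high spatial frequencies.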

  10. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  11. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, a flexible, robust and accurate version that can run efficiently on personal computers with limited RAM is still lacking. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement in high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy, combined with dynamic data management, is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm, with the complete initial guess of the deformation vector accurately predicted from previously computed points. Since only a limited number of slices of interest in the reference and deformed volume images, rather than the whole volume images, are required, the DVC calculation can be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
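The reliability-guided tracking strategy rests on a simple ordering rule: always process next the candidate point with the highest correlation coefficient among neighbors of already-computed points, so every new point inherits a trustworthy displacement initial guess. A minimal sketch with a toy precomputed ZNCC map (the map values and grid are assumptions; the real method evaluates ZNCC per point and couples this ordering with the 3D IC-GN solver and layer-wise data management):

```python
import heapq
import numpy as np

def reliability_guided_order(zncc, seed):
    """Visit grid points in reliability-guided order: starting from a
    seed, repeatedly expand the unprocessed point with the highest
    correlation coefficient (ZNCC) adjacent to the processed region.
    Here we only return the visiting order; in DVC each visited point
    would take its displacement initial guess from the neighbor that
    enqueued it."""
    rows, cols = zncc.shape
    visited = np.zeros((rows, cols), dtype=bool)
    heap = [(-zncc[seed], seed)]           # max-heap via negated ZNCC
    order = []
    while heap:
        _, (r, c) = heapq.heappop(heap)
        if visited[r, c]:
            continue
        visited[r, c] = True
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                heapq.heappush(heap, (-zncc[nr, nc], (nr, nc)))
    return order

zncc = np.array([[0.9, 0.2, 0.8],
                 [0.7, 0.95, 0.3],
                 [0.1, 0.6, 0.5]])
order = reliability_guided_order(zncc, seed=(1, 1))
print(order[:3])  # -> [(1, 1), (1, 0), (0, 0)]
```

Because the frontier is a priority queue, unreliable (poorly correlated) points are deferred until they are surrounded by computed neighbors, which is what makes the propagation of initial guesses robust.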

  12. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    NASA Astrophysics Data System (ADS)

    White, Jeremy T.

    The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events.
In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data for reducing parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.

  13. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can estimate particle size distributions (PSDs) better than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and self-adaptively resolves several key issues, including the choice of the weighting coefficients, the inversion range, and the selection between two regularization algorithms for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
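The nonnegative Tikhonov step at the core of such regularized PSD inversions can be sketched on a toy linear problem. Everything below (the Gaussian smoothing kernel standing in for the scattering kernel, the sizes, the noise level, and the crude clip-and-refit nonnegativity loop) is an assumption for illustration; the published WIRNNT-PT algorithm adds wavelet multiscale recursion and self-adaptive weighting on top:

```python
import numpy as np

rng = np.random.default_rng(0)

def tikhonov_nonneg(A, g, lam, n_iter=200):
    """Tikhonov-regularized least squares with a nonnegativity
    projection: solve (A^T A + lam I) f = A^T g, then alternate
    clipping f >= 0 with damped refit steps toward the regularized
    solution. A crude stand-in for proper nonnegative regularized
    inversion."""
    M = A.T @ A + lam * np.eye(A.shape[1])
    b = A.T @ g
    f = np.linalg.solve(M, b)
    for _ in range(n_iter):
        f = np.clip(f, 0.0, None)
        f = f + 0.5 * np.linalg.solve(M, b - M @ f)   # damped refit
    return np.clip(f, 0.0, None)

# Toy problem: the data g are a smoothed, noisy version of the PSD f
n = 60
x = np.arange(n)
A = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 9.0)
A /= A.sum(axis=1, keepdims=True)
f_true = np.exp(-0.5 * (x - 25) ** 2 / 16.0)          # unimodal PSD
g = A @ f_true + 1e-3 * rng.standard_normal(n)

f_rec = tikhonov_nonneg(A, g, lam=1e-3)
corr = np.corrcoef(f_true, f_rec)[0, 1]
print(round(float(corr), 3))
```

The regularization parameter `lam` trades noise suppression against resolution, which is exactly the choice the self-adapting scheme in the paper automates across wavelet scales.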

  14. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2017-12-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. With the random boundary condition, results in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  15. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on for example, complex geostatistical models and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
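The core idea, replacing an expensive forward model with a fast trained approximation inside a Monte Carlo sampler, can be sketched in one dimension. Here a polynomial fit stands in for the trained neural network, and the forward function, noise level, and sampler settings are all illustrative assumptions (the paper additionally quantifies the surrogate's modeling error probabilistically rather than folding it into the data noise):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the expensive physical forward model
def forward_true(m):
    return np.tanh(m) + 0.3 * m

# Cheap surrogate: a polynomial fit to a handful of expensive evaluations
# (a trained neural network plays this role in the paper)
xs = np.linspace(-3, 3, 25)
coeffs = np.polyfit(xs, forward_true(xs), deg=7)
def forward_surrogate(m):
    return np.polyval(coeffs, m)

d_obs = forward_true(1.2)          # "observed" datum from true model m = 1.2
sigma = 0.05                       # data noise (surrogate error folded in here)

def log_like(m):
    r = d_obs - forward_surrogate(m)
    return -0.5 * (r / sigma) ** 2

# Metropolis sampler over the 1-D model parameter, using only the surrogate
m, samples = 0.0, []
ll = log_like(m)
for _ in range(5000):
    prop = m + 0.3 * rng.standard_normal()
    ll_prop = log_like(prop)
    if np.log(rng.random()) < ll_prop - ll:
        m, ll = prop, ll_prop
    samples.append(m)
post = np.array(samples[1000:])    # discard burn-in
print(round(float(post.mean()), 2))
```

Every likelihood evaluation costs a polynomial evaluation instead of a full physics solve, which is the source of the orders-of-magnitude speedup reported for the crosshole GPR case.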

  16. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2018-02-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. With the random boundary condition, results in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  17. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires solving a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the companion paper, where the parallel efficiency and the scalability of the code will also be quantified.

  18. Inverse Symmetry in Complete Genomes and Whole-Genome Inverse Duplication

    PubMed Central

    Kong, Sing-Guan; Fan, Wen-Lang; Chen, Hong-Da; Hsu, Zi-Ting; Zhou, Nengji; Zheng, Bo; Lee, Hoong-Chien

    2009-01-01

    The cause of symmetry is usually subtle, and its study often leads to a deeper understanding of the bearer of the symmetry. To gain insight into the dynamics driving the growth and evolution of genomes, we conducted a comprehensive study of textual symmetries in 786 complete chromosomes. We focused on symmetry based on our belief that, in spite of their extreme diversity, genomes must share common dynamical principles and mechanisms that drive their growth and evolution, and that the most robust footprints of such dynamics are symmetry related. We found that while complement and reverse symmetries are essentially absent in genomic sequences, inverse symmetry (complement plus reverse) is prevalent in complex patterns in most chromosomes, a vast majority of which have near-maximum global inverse symmetry. We also discovered relations that can quantitatively account for the long observed but unexplained phenomenon of k-mer skews in genomes. Our results suggest segmental and whole-genome inverse duplications are important mechanisms in genome growth and evolution, probably because they are efficient means by which the genome can exploit its double-stranded structure to enrich its code-inventory. PMID:19898631
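The notion of inverse symmetry used here, each k-mer occurring about as often as its inverse (reversed complement), is easy to score directly. The following sketch (the scoring formula and toy sequences are assumptions for illustration) also shows why an inverse duplication produces perfect inverse symmetry:

```python
from collections import Counter

COMP = str.maketrans("ACGT", "TGCA")

def inverse_symmetry(seq, k=2):
    """Score how closely each k-mer count matches the count of its
    inverse (reversed complement). 1.0 means perfect inverse symmetry,
    0.0 means every k-mer lacks its inverse partner entirely."""
    kmers = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = diff = 0
    for kmer, n in kmers.items():
        inv = kmer.translate(COMP)[::-1]      # reversed complement
        total += n + kmers[inv]
        diff += abs(n - kmers[inv])
    return 1.0 - diff / total

# Appending a sequence's own inverse (whole-genome inverse duplication)
# yields exact inverse symmetry of the k-mer counts
s = "ACGGTAC"
dup = s + s.translate(COMP)[::-1]
print(inverse_symmetry(dup, k=2))    # -> 1.0
print(inverse_symmetry("AAAAAA", k=2))   # no TT to pair with AA -> 0.0
```

This is the mechanism the abstract points to: inverse duplications automatically equalize each k-mer with its reversed complement on the same strand, so repeated duplications drive genomes toward maximal inverse symmetry.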

  19. A New Approach to the Numerical Evaluation of the Inverse Radon Transform with Discrete, Noisy Data.

    DTIC Science & Technology

    1980-07-01

    spline form. The resulting analytic expression for the inner integral in the inverse transform is then readily evaluated, and the outer (periodic...integral is replaced by a sum. The work involved to obtain the inverse transform appears to be within the capability of existing computing equipment for

  20. Approximate non-linear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-03-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using the second-order Born approximation and corresponding high-frequency ray theoretical methods. Within the local double scattering mechanism, the P-wave transmission factors are elaborately calculated, which results in the radiation pattern for P-wave scattering being a quadratic combination of the density and Lamé's moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in the form of a generalized Radon transform (GRT). After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic non-linear system. Numerical tests with synthetic data computed by a finite-difference scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is only syncretized with P-wave scattering, it can be extended to invert multicomponent elastic data containing both P-wave and S-wave information.

  1. Approximate nonlinear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-07-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using the second-order Born approximation and corresponding high-frequency ray theoretical methods. Within the local double scattering mechanism, the P-wave transmission factors are elaborately calculated, which results in the radiation pattern for P-wave scattering being a quadratic combination of the density and Lamé's moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in the form of a generalized Radon transform. After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic nonlinear system. Numerical tests with synthetic data computed by a finite-difference scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is only syncretized with P-wave scattering, it can be extended to invert multicomponent elastic data containing both P- and S-wave information.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  3. Modeling and control of magnetorheological fluid dampers using neural networks

    NASA Astrophysics Data System (ADS)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects of utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper online, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the developed neural network models are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can predict the damping force accurately, and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  4. Numerical Modeling in Geodynamics: Success, Failure and Perspective

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.

    2005-12-01

    A real success in numerical modeling of the dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. Success in numerical modeling rests on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and only then move to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true from erroneous solutions to the geodynamic problem, especially when the problem is complex enough. (iii) Test your model against analytical and asymptotic solutions and simple 2D and 3D model examples. Develop benchmark analyses of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you may solve an improperly posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as few tuning variables as possible. Even two tuning variables provide enough freedom to constrain your model reasonably well with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of your numerical modeling: to understand geophysical phenomena.
(vi) If the number of tuning model variables is greater than two, test carefully the effect of each variable on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never make great accuracy an end in itself: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should a numerical model be? A model that images every detail of reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real dynamics of the Earth, but we should try to model the dynamics in such a way as to simulate basic geophysical processes and phenomena. Does a particular model have predictive power? Every numerical model has some predictive power; otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in the belief that they describe the dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. Inverse modeling makes it possible to test geodynamic models forward in time using initial conditions restored from present-day observations instead of unknown conditions.

  5. On parameters identification of computational models of vibrations during quiet standing of humans

    NASA Astrophysics Data System (ADS)

    Barauskas, R.; Krušinskienė, R.

    2007-12-01

    Vibration of the center of pressure (COP) of the human body on the base of support during quiet standing is a widely used clinical measurement, which provides useful information about the physical and health condition of an individual. In this work, vibrations of the COP of a human body in the forward-backward direction during quiet standing are generated using a controlled inverted pendulum (CIP) model with a single degree of freedom (dof), supplied with a proportional-integral-derivative (PID) controller, which represents the behavior of the central nervous system, and excited by a cumulative disturbance vibration generated within the body by breathing or any other physical condition. The identification of the model and disturbance parameters is an important stage in creating a close-to-reality computational model able to evaluate features of the disturbance. The aim of this study is to present a CIP model parameter identification approach based on the information captured by the time series of the COP signal. The identification procedure is based on minimizing an error function formulated in terms of the time laws of computed and experimentally measured COP vibrations; as an alternative, the error function is formulated in terms of the stabilogram diffusion function (SDF). The minimization of the error functions is carried out by employing methods based on sensitivity functions of the error with respect to the model and excitation parameters, obtained using variational techniques. The inverse dynamic problem approach has been employed in order to establish the properties of the disturbance time laws that ensure satisfactory coincidence of measured and computed COP vibration laws. The main difficulty of the investigated problem is encountered during the model validation stage: in general, neither the PID controller parameter set nor the disturbance time law is known in advance. 
In this work, an error function formulated in terms of the time derivative of the disturbance torque has been proposed in order to obtain the PID controller parameters as well as the reference time law of the disturbance. The disturbance torque is calculated from experimental data using the inverse dynamic approach. Experiments presented in this study revealed that the vibrations of the disturbance torque and the PID controller parameters identified by the method may be regarded as feasible in humans. The presented approach may easily be extended to structural models with any number of dof or higher structural complexity.
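
    The plant described above can be sketched in a few lines: a single-dof inverted pendulum whose ankle torque comes from a PID controller acting on the sway angle, excited by an internal disturbance. All numerical values below (body mass, COM height, gains, disturbance shape) are illustrative assumptions, not the paper's identified parameters.

```python
import numpy as np

def simulate_cip(kp=800.0, ki=300.0, kd=200.0, dt=0.001, T=10.0):
    """Toy single-dof controlled inverted pendulum (CIP) for sagittal sway,
    with a PID ankle torque and a sinusoidal internal disturbance.
    All parameter values are illustrative, not the paper's."""
    m, h, g = 70.0, 0.9, 9.81          # body mass [kg], COM height [m]
    I = m * h ** 2                      # point-mass inertia about the ankle
    theta, omega, integ = 0.01, 0.0, 0.0   # small initial forward lean [rad]
    t = np.arange(0.0, T, dt)
    disturbance = 0.5 * np.sin(2 * np.pi * 0.3 * t)  # e.g. breathing torque [N*m]
    trace = np.empty_like(t)
    for i in range(t.size):
        integ += theta * dt
        torque = kp * theta + ki * integ + kd * omega    # PID ankle torque
        # gravity topples, the controller restores, the disturbance excites
        alpha = (m * g * h * theta - torque + disturbance[i]) / I
        omega += alpha * dt
        theta += omega * dt
        trace[i] = theta
    return t, trace
```

    Running the sketch produces a bounded sway trace (kp exceeds the gravitational stiffness m*g*h, so the closed loop is stable); an identification procedure like the paper's would then adjust the gains and disturbance until such a trace matches a measured COP signal.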

  6. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
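
    The core idea of partitioned inversion, inverting a large symmetric positive definite matrix block by block via the Schur complement so that only sub-blocks need be held in core, can be sketched as follows. This is a generic recursive sketch of the principle, not the SOLVE program's actual algorithm, and the block size is an illustrative choice.

```python
import numpy as np

def partitioned_inverse(A, block=64):
    """Invert a symmetric positive definite matrix by recursive 2x2
    block partitioning using the Schur complement. Only blocks of size
    <= `block` are inverted directly."""
    n = A.shape[0]
    if n <= block:
        return np.linalg.inv(A)
    k = n // 2
    A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
    B11 = partitioned_inverse(A11, block)        # invert leading block
    S = A22 - A12.T @ B11 @ A12                  # Schur complement (also SPD)
    S_inv = partitioned_inverse(S, block)
    top_right = -B11 @ A12 @ S_inv
    top_left = B11 + B11 @ A12 @ S_inv @ A12.T @ B11
    return np.block([[top_left, top_right],
                     [top_right.T, S_inv]])
```

    In a core-limited setting the blocks would live on secondary storage and be staged in one at a time; the recursion structure is what keeps the working set small.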

  7. A Fortran 77 computer code for damped least-squares inversion of Slingram electromagnetic anomalies over thin tabular conductors

    NASA Astrophysics Data System (ADS)

    Dondurur, Derman; Sarı, Coşkun

    2004-07-01

    A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. A damped least-squares inversion algorithm is used to estimate the anomalous body parameters, e.g. the depth, dip and surface projection point of the target. Iteration progress is controlled by a maximum relative error value, and iterations continue until a tolerance value is satisfied, while the modification of Marquardt's parameter is controlled by the sum of squared errors. To form the Jacobian matrix, the partial derivatives of the theoretical anomaly expression with respect to the parameters being optimised are calculated by numerical differentiation using first-order forward finite differences. One theoretical and two field anomalies are used to test the accuracy and applicability of the inversion program. Inversion of the field data indicated that the depth and surface projection point parameters of the conductor are estimated correctly; however, considerable discrepancies appeared in the estimated dip angles. It is therefore concluded that the most important factor behind the misfit between observed and calculated data is that the theory used for computing Slingram anomalies is valid only for thin conductors, and this assumption may cause incorrect dip estimates in the case of wide conductors.
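
    A minimal sketch of the damped least-squares (Marquardt) loop described above, with the Jacobian built by first-order forward finite differences. It is applied here to a generic model function rather than the Slingram anomaly expression; step-size, damping schedule and tolerances are illustrative.

```python
import numpy as np

def forward_jacobian(f, p, h=1e-6):
    """Jacobian of f at p by first-order forward finite differences."""
    y0 = f(p)
    J = np.empty((y0.size, p.size))
    for j in range(p.size):
        pj = p.copy()
        pj[j] += h
        J[:, j] = (f(pj) - y0) / h
    return J

def marquardt(f, data, p0, lam=1e-2, iters=50, tol=1e-10):
    """Damped least-squares (Levenberg-Marquardt) fit of model f to data."""
    p = p0.astype(float)
    err = np.sum((f(p) - data) ** 2)
    for _ in range(iters):
        J = forward_jacobian(f, p)
        r = data - f(p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        new_err = np.sum((f(p + dp) - data) ** 2)
        if new_err < err:            # accept the step, relax the damping
            p, err, lam = p + dp, new_err, lam * 0.5
        else:                        # reject the step, increase the damping
            lam *= 10.0
        if err < tol:
            break
    return p
```

    The damping parameter plays the role of Marquardt's parameter in the abstract: it grows when the sum of squared errors increases and shrinks when a step is accepted.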

  8. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize the inverse domain, a mask approach is implemented, meaning that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.

  9. HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.

    2015-12-01

    Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulations are used to recover the features of the estimated fields using inverse techniques. We developed a free-source 2D Matlab package for performing hydraulic tomography analysis in steady-state and transient regimes. The package uses the finite element method to solve the groundwater flow equation for simple or complex geometries, accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate approach to the evaluation of sensitivity matrices than the finite-difference method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated, demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.
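
    The adjoint-state trick that makes the sensitivity matrices cheap can be illustrated on a generic discretized linear problem: for A(p) x = b and a scalar objective J = c.x, one extra solve with A^T yields the gradient with respect to all parameters at once. This is a sketch of the principle only, not the package's groundwater-flow implementation; the parameterization A = A0 + sum_k p_k E_k is an illustrative assumption.

```python
import numpy as np

def adjoint_sensitivities(A0, Es, p, b, c):
    """Gradient of J(p) = c.x, where (A0 + sum_k p_k E_k) x = b, via a
    single adjoint solve: dJ/dp_k = -lam^T (dA/dp_k) x with A^T lam = c."""
    A = A0 + sum(pk * Ek for pk, Ek in zip(p, Es))
    x = np.linalg.solve(A, b)        # one forward solve
    lam = np.linalg.solve(A.T, c)    # one adjoint solve
    # each dA/dp_k is E_k, so the whole gradient costs two solves total
    return np.array([-lam @ (Ek @ x) for Ek in Es])
```

    A finite-difference Jacobian would instead need one extra forward solve per parameter, which is why the adjoint approach wins for underdetermined problems with many unknowns.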

  10. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly-implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, and provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.

  11. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field-scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. 
    We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters through literature field cases. The quality of the parameter estimates is then analyzed as a function of the number, type and location of data.
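
    A toy of the PCE building block: fit a total-degree Legendre chaos to a two-parameter model by least-squares regression, then read first-order Sobol indices directly off the squared coefficients of the orthonormal basis. The basis, degree, parameter distribution and sampling below are illustrative choices, not the authors' setup.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def pce_sobol(f, degree=3, n_samples=400, seed=0):
    """Legendre PCE of f(x1, x2) with x_i ~ U(-1, 1), fit by regression;
    returns the first-order Sobol indices (S1, S2) computed from the
    PCE coefficients."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
    y = f(X[:, 0], X[:, 1])
    # multi-indices of total degree <= degree
    idx = [(i, j) for i in range(degree + 1)
                  for j in range(degree + 1) if i + j <= degree]
    def leg(k, x):                       # Legendre P_k, normalized to unit variance
        c = np.zeros(k + 1)
        c[k] = 1.0
        return legval(x, c) * np.sqrt(2 * k + 1)
    Phi = np.column_stack([leg(i, X[:, 0]) * leg(j, X[:, 1]) for i, j in idx])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    # total variance = sum of squared non-constant coefficients
    var = sum(c ** 2 for (i, j), c in zip(idx, coef) if (i, j) != (0, 0))
    S1 = sum(c ** 2 for (i, j), c in zip(idx, coef) if i > 0 and j == 0) / var
    S2 = sum(c ** 2 for (i, j), c in zip(idx, coef) if j > 0 and i == 0) / var
    return S1, S2
```

    Because the basis is orthonormal under the input distribution, each Sobol index is just a ratio of sums of squared coefficients; no extra model runs are needed once the surrogate is fit.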

  12. Geophysical approaches to inverse problems: A methodological comparison. Part 1: A Posteriori approach

    NASA Technical Reports Server (NTRS)

    Seidman, T. I.; Munteanu, M. J.

    1979-01-01

    The relationships among a variety of general computational methods (and their variants) for treating ill-posed problems, such as geophysical inverse problems, are considered. Differences in approach and interpretation based on varying assumptions as to, e.g., the nature of measurement uncertainties are discussed, along with the factors to be considered in selecting an approach. The reliability of the results of such computations is addressed.

  13. A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
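
    The matrix-scaling idea mentioned above can be sketched with entropy-regularized transport: Sinkhorn iterations scale a kernel matrix until its marginals match the uniform prior weights and the (importance-weighted) posterior weights, yielding a transport plan between the two ensembles. This is a generic toy, not the authors' AET solver; the cost function, regularization strength and weights are illustrative.

```python
import numpy as np

def sinkhorn_transport(X, target_w, eps=0.2, iters=2000):
    """Entropy-regularized discrete transport via Sinkhorn matrix scaling:
    find a plan P >= 0 whose row marginals are the uniform prior weights
    and whose column marginals are the normalized importance weights."""
    n = len(X)
    a = np.full(n, 1.0 / n)                 # prior sample weights
    b = target_w / target_w.sum()           # posterior (importance) weights
    C = (X[:, None] - X[None, :]) ** 2      # squared-distance cost
    C = C / C.max()                         # scale costs to [0, 1]
    K = np.exp(-C / eps)                    # Gibbs kernel
    u, v = np.ones(n), np.ones(n)
    for _ in range(iters):                  # alternate row/column scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]      # transport plan
```

    The columns of the plan express each posterior particle as a mixture of prior particles, which is the mechanism a transport-based sampler uses to move an ensemble from prior to posterior.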

  14. Pareto Joint Inversion of Love and Quasi Rayleigh's waves - synthetic study

    NASA Astrophysics Data System (ADS)

    Bogacz, Adrian; Dalton, David; Danek, Tomasz; Miernik, Katarzyna; Slawinski, Michael A.

    2017-04-01

    In this contribution a specific application of Pareto joint inversion to a geophysical problem is presented. The Pareto criterion combined with Particle Swarm Optimization was used to solve geophysical inverse problems for Love and quasi-Rayleigh waves. The basic theory of the forward problem calculation for the chosen surface waves is described. To avoid computational problems, some simplifications were made; this allowed faster and more straightforward calculation without loss of generality of the solution. According to the restrictions of the solving scheme, the considered model must have exactly two layers: an elastic isotropic surface layer and an elastic isotropic half-space of infinite thickness. The aim of the inversion is to obtain the elastic parameters and model geometry using dispersion data. In the calculations, different cases were considered, such as different numbers of modes for different wave types and different frequencies. The created solutions use the OpenMP standard for parallel computing, which helps reduce computation times. The results of experimental computations are presented and commented on. This research was performed in the context of The Geomechanics Project supported by Husky Energy. Also, this research was partially supported by the Natural Sciences and Engineering Research Council of Canada, grant 238416-2013, and by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
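
    A minimal particle swarm optimizer of the kind used above. In a Pareto joint inversion the scalar objective would be a weighted combination of the Love and quasi-Rayleigh misfits, swept over weights to trace the Pareto front; here the objective is an arbitrary function, and all tuning constants are illustrative.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a scalar objective f over a
    box; returns the best position found and its objective value."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()
```

    For a Pareto sweep one would call this once per weight vector, e.g. minimizing `alpha * misfit_love(p) + (1 - alpha) * misfit_rayleigh(p)` for several values of alpha; OpenMP-style parallelism in the original maps naturally onto the independent particle evaluations.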

  15. Voxel inversion of airborne electromagnetic data for improved model integration

    NASA Astrophysics Data System (ADS)

    Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders

    2014-05-01

    Inversion of electromagnetic data has migrated from single-site interpretations to inversions covering entire surveys, using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points. For airborne electromagnetic (AEM) surveys, the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid not correlated with the geophysical model space, and the geophysical information has to be relocated for integration in (hydro)geological models. We have developed a new geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the centers of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, in Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km x 16 km. The voxel inversion was carried out on a structured grid of 260 x 325 x 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. 
A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054 spatially constrained 1D models with 29 layers. For comparison, the SCI inversion models were gridded on the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the positions of the acquired data and at the same time fits the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for directly informing (hydro)geological models and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.
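
    Step 2 of the forward scheme, interpolating node properties to the centers of the "virtual" layers, can be sketched with plain inverse-distance weighting (one of the interpolants the abstract mentions). The node layout, distance power and exact-hit handling below are illustrative, not the algorithm's actual choices.

```python
import numpy as np

def idw_interpolate(nodes, values, targets, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation: estimate a property at
    each target point (e.g. a virtual-layer center under a sounding)
    from values defined on voxel-grid nodes."""
    out = np.empty(len(targets))
    for i, t in enumerate(targets):
        d = np.linalg.norm(nodes - t, axis=1)
        if d.min() < eps:                  # target coincides with a node
            out[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d ** power               # closer nodes weigh more
        out[i] = np.sum(w * values) / np.sum(w)
    return out
```

    In the voxel scheme this decouples where the model lives (the grid nodes) from where the data live (the sounding positions), which is exactly what lets the inversion parameters sit on a geology-friendly grid.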

  16. Workflows for Full Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  17. Dynamic clearance measure to evaluate locomotor and perceptuo-motor strategies used for obstacle circumvention in a virtual environment.

    PubMed

    Darekar, Anuja; Lamontagne, Anouk; Fung, Joyce

    2015-04-01

    Circumvention around an obstacle entails a dynamic interaction with the obstacle to maintain a safe clearance. We used a novel mathematical interpolation method based on the modified Shepard's method of Inverse Distance Weighting to compute dynamic clearance that reflected this interaction as well as minimal clearance. This proof-of-principle study included seven young healthy, four post-stroke and four healthy age-matched individuals. A virtual environment designed to assess obstacle circumvention was used to administer a locomotor (walking) and a perceptuo-motor (navigation with a joystick) task. In both tasks, participants were asked to navigate towards a target while avoiding collision with a moving obstacle that approached from either head-on, or 30° left or right. Among young individuals, dynamic clearance did not differ significantly between obstacle approach directions in both tasks. Post-stroke individuals maintained larger and smaller dynamic clearance during the locomotor and the perceptuo-motor task respectively as compared to age-matched controls. Dynamic clearance was larger than minimal distance from the obstacle irrespective of the group, task and obstacle approach direction. Also, in contrast to minimal distance, dynamic clearance can respond differently to different avoidance behaviors. Such a measure can be beneficial in contrasting obstacle avoidance behaviors in different populations with mobility problems. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Dynamic Forms. Part 1: Functions

    NASA Technical Reports Server (NTRS)

    Meyer, George; Smith, G. Allan

    1993-01-01

    The formalism of dynamic forms is developed as a means for organizing and systematizing the design of control systems. The formalism allows the designer to easily compute derivatives to various orders of the large composite functions that occur in flight-control design. Such functions involve many function-of-a-function calls that may be nested to many levels. The component functions may be multiaxis and nonlinear, and they may include rotation transformations. A dynamic form is defined as a variable together with its time derivatives up to some fixed but arbitrary order. The variable may be a scalar, a vector, a matrix, a direction cosine matrix, Euler angles, or Euler parameters. Algorithms for standard elementary functions and operations of scalar dynamic forms are developed first. Then vector and matrix operations and transformations between parameterizations of rotations are developed at the next level in the hierarchy. Commonly occurring algorithms in control-system design, including inversion of pure feedback systems, are developed at the third level. A large-angle, three-axis attitude servo and other examples are included to illustrate the effectiveness of the developed formalism. All algorithms were implemented in FORTRAN code. Practical experience shows that the proposed formalism may significantly improve the productivity of the design and coding process.
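
    The central object, a variable bundled with its time derivatives up to a fixed order, and the Leibniz-rule product that propagates those derivatives through multiplications, can be sketched in a few lines. This is a minimal Python illustration of the scalar case only; the paper's FORTRAN implementation covers many more operations (elementary functions, vectors, matrices, rotation parameterizations).

```python
import numpy as np
from math import comb

class DynamicForm:
    """A scalar variable with its time derivatives up to a fixed order,
    stored as d[k] = k-th time derivative. Only + and * are shown."""
    def __init__(self, d):
        self.d = np.asarray(d, float)

    def __add__(self, other):
        # derivatives add term by term
        return DynamicForm(self.d + other.d)

    def __mul__(self, other):
        # general Leibniz rule: (fg)^(k) = sum_j C(k, j) f^(j) g^(k-j)
        n = len(self.d)
        out = np.zeros(n)
        for k in range(n):
            out[k] = sum(comb(k, j) * self.d[j] * other.d[k - j]
                         for j in range(k + 1))
        return DynamicForm(out)
```

    For example, the form for x = t^2 evaluated at t = 1 is [1, 2, 2]; squaring it with the Leibniz product reproduces the derivatives of t^4 at t = 1, namely [1, 4, 12], without ever differentiating symbolically.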

  19. Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.

    PubMed

    Daunizeau, J; Friston, K J; Kiebel, S J

    2009-11-01

    In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.

  20. Principal Component Geostatistical Approach for large-dimensional inverse problems

    PubMed Central

    Kitanidis, P K; Lee, J

    2014-01-01

    The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknowns. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce this cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free in terms of the Jacobian matrix and improves the scalability of the geostatistical inverse problem. Each iteration requires K runs of the forward problem, where K is not just much smaller than m but can even be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
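
    The matrix-free ingredient can be illustrated directly: the action of the Jacobian on K chosen directions is obtained with K extra forward runs, without ever forming J. This is a generic finite-difference sketch of that idea, not the paper's PCGA code; the step size is an illustrative choice.

```python
import numpy as np

def jacobian_action(f, p, V, h=1e-6):
    """Approximate J @ v_k for the K direction vectors v_k stored as
    columns of V, using one extra forward run per direction:
    J v ~ (f(p + h v) - f(p)) / h."""
    y0 = f(p)                                  # one base forward run
    out = np.empty((y0.size, V.shape[1]))
    for k in range(V.shape[1]):
        out[:, k] = (f(p + h * V[:, k]) - y0) / h
    return out
```

    With K directions chosen from a low-rank representation of the prior covariance, each Gauss-Newton iteration touches the forward model only K + 1 times instead of once per unknown, which is the source of the roughly linear-in-m scaling claimed above.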


  2. GARCH modelling of covariance in dynamical estimation of inverse solutions

    NASA Astrophysics Data System (ADS)

    Galka, Andreas; Yamashita, Okito; Ozaki, Tohru

    2004-12-01

    The problem of estimating unobserved states of spatially extended dynamical systems poses an inverse problem, which can be solved approximately by a recently developed variant of Kalman filtering; in order to provide the model of the dynamics with more flexibility with respect to space and time, we suggest combining the concept of GARCH modelling of covariance, well known in econometrics, with Kalman filtering. We formulate this algorithm for spatiotemporal systems governed by stochastic diffusion equations and demonstrate its feasibility by presenting a numerical simulation designed to imitate the generation of electroencephalographic recordings by the human cortex.
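
    A toy of the combination: a scalar Kalman filter whose state-noise variance follows a GARCH(1,1) recursion driven by the innovations, rather than being held fixed. This is far simpler than the paper's spatiotemporal diffusion setting, and all dynamics and coefficients below are illustrative.

```python
import numpy as np

def garch_kalman(y, a, c, alpha0, alpha1, beta1, r):
    """Scalar Kalman filter for x_t = a x_{t-1} + w_t, y_t = c x_t + e_t,
    where the state-noise variance q_t follows a GARCH(1,1) recursion
    q_{t+1} = alpha0 + alpha1 * innov_t^2 + beta1 * q_t."""
    x, P = 0.0, 1.0
    q = alpha0 / (1.0 - alpha1 - beta1)   # start at the unconditional variance
    xs = []
    for yt in y:
        # predict with the current time-varying state-noise variance q
        x_pred = a * x
        P_pred = a * a * P + q
        # standard Kalman update
        S = c * c * P_pred + r
        K = P_pred * c / S
        innov = yt - c * x_pred
        x = x_pred + K * innov
        P = (1.0 - K * c) * P_pred
        # GARCH(1,1): large innovations inflate next step's state noise
        q = alpha0 + alpha1 * innov ** 2 + beta1 * q
        xs.append(x)
    return np.array(xs)
```

    The GARCH recursion lets the filter temporarily trust the data more after bursts of large prediction errors, which is the extra flexibility in space and time that the abstract motivates.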

  3. Force and Moment Approach for Achievable Dynamics Using Nonlinear Dynamic Inversion

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.; Bacon, Barton J.

    1999-01-01

    This paper describes a general form of nonlinear dynamic inversion control for use in a generic nonlinear simulation to evaluate candidate augmented aircraft dynamics. The implementation is specifically tailored to the task of quickly assessing an aircraft's control power requirements and defining the achievable dynamic set. The achievable set is evaluated while undergoing complex mission maneuvers, and perfect tracking will be accomplished when the desired dynamics are achievable. Variables are extracted directly from the simulation model each iteration, so robustness is not an issue. Included in this paper is a description of the implementation of the forces and moments from simulation variables, the calculation of control effectiveness coefficients, methods for implementing different types of aerodynamic and thrust vectoring controls, adjustments for control effector failures, and the allocation approach used. A few examples illustrate the perfect tracking results obtained.

  4. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved.
(5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since kernels are stored on disk, it can be repeated many times for different regularization parameters without the need to solve the forward problem, making the approach accessible to Occam's method. Changes in the choice of misfit functional, the weighting of data and the selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion) which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern FORTRAN language using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method by a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.
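The three-stage iteration can be sketched as a skeleton in which each stage sits behind its own interface; the toy forward solver, kernels and update rule below are placeholders, not ASKI code:

```python
import numpy as np

def solve_forward(model, freqs):
    """Stand-in forward solver: synthetic spectra, one complex value per frequency."""
    return {f: model.sum() * np.exp(-1j * f) for f in freqs}

def compute_kernels(model, freqs):
    """Stand-in sensitivity kernels, defined on the (coarse) inversion grid."""
    return {f: np.exp(-1j * f) * np.ones_like(model) for f in freqs}

def update_model(model, data, synthetics, kernels, damping=1.0):
    """Damped least-squares model update from frequency-domain residuals."""
    G = np.array([kernels[f] for f in data])               # kernel matrix
    d = np.array([data[f] - synthetics[f] for f in data])  # residuals
    lhs = G.conj().T @ G + damping * np.eye(model.size)
    return model + np.real(np.linalg.solve(lhs, G.conj().T @ d))

freqs = [0.1, 0.2, 0.3]
true_model = np.array([1.0, 2.0, 3.0])
data = solve_forward(true_model, freqs)      # "observed" spectra
model = np.zeros(3)
for _ in range(10):                          # the three-stage iteration
    syn = solve_forward(model, freqs)
    ker = compute_kernels(model, freqs)
    model = update_model(model, data, syn, ker)
```

Because each stage only communicates through its arguments and return values, any of the three functions can be swapped out independently, which is the modularity argument made in the abstract.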

  5. Development of a three-dimensional multistage inverse design method for aerodynamic matching of axial compressor blading

    NASA Astrophysics Data System (ADS)

    van Rooij, Michael P. C.

    Current turbomachinery design systems increasingly rely on multistage Computational Fluid Dynamics (CFD) as a means to assess performance of designs. However, design weaknesses attributed to improper stage matching are addressed using often ineffective strategies involving a costly iterative loop between blading modification, revision of design intent, and evaluation of aerodynamic performance. A design methodology is presented which greatly improves the process of achieving design-point aerodynamic matching. It is based on a three-dimensional viscous inverse design method which generates the blade camber surface based on prescribed pressure loading, thickness distribution and stacking line. This inverse design method has been extended to allow blading analysis and design in a multi-blade row environment. Blade row coupling was achieved through a mixing plane approximation. Parallel computing capability in the form of MPI has been implemented to reduce the computational time for multistage calculations. Improvements have been made to the flow solver to reach the level of accuracy required for multistage calculations. These include inclusion of heat flux, temperature-dependent treatment of viscosity, and improved calculation of stress components and artificial dissipation near solid walls. A validation study confirmed that the obtained accuracy is satisfactory at design point conditions. Improvements have also been made to the inverse method to increase robustness and design fidelity. These include the possibility to exclude spanwise sections of the blade near the endwalls from the design process, and a scheme that adjusts the specified loading area for changes resulting from the leading and trailing edge treatment. Furthermore, a pressure loading manager has been developed. Its function is to automatically adjust the pressure loading area distribution during the design calculation in order to achieve a specified design objective. 
Possible objectives are overall mass flow and compression ratio, and radial distribution of exit flow angle. To supplement the loading manager, mass flow inlet and exit boundary conditions have been implemented. Through appropriate combination of pressure or mass flow inflow/outflow boundary conditions and loading manager objectives, increased control over the design intent can be obtained. The three-dimensional multistage inverse design method with pressure loading manager was demonstrated to offer greatly enhanced blade row matching capabilities. Multistage design allows for simultaneous design of blade rows in a mutually interacting environment, which permits the redesigned blading to adapt to changing aerodynamic conditions resulting from the redesign. This ensures that the obtained blading geometry and performance implied by the prescribed pressure loading distribution are consistent with operation in the multi-blade row environment. The developed methodology offers high aerodynamic design quality and productivity, and constitutes a significant improvement over existing approaches used to address design-point aerodynamic matching.

  6. Inverse Design of Low-Boom Supersonic Concepts Using Reversed Equivalent-Area Targets

    NASA Technical Reports Server (NTRS)

Li, Wu; Rallabandi, Sriram

    2011-01-01

A promising path for developing a low-boom configuration is a multifidelity approach that (1) starts from a low-fidelity low-boom design, (2) refines the low-fidelity design with computational fluid dynamics (CFD) equivalent-area (Ae) analysis, and (3) improves the design with sonic-boom analysis by using CFD off-body pressure distributions. The focus of this paper is on the third step of this approach, in which the design is improved with sonic-boom analysis through the use of CFD calculations. A new inverse design process for off-body pressure tailoring is formulated and demonstrated with a low-boom supersonic configuration that was developed by using the mixed-fidelity design method with CFD Ae analysis. The new inverse design process uses the reverse propagation of the pressure distribution (dp/p) from a mid-field location to a near-field location, converts the near-field dp/p into an equivalent-area distribution, generates a low-boom target for the reversed equivalent area (Ae,r) of the configuration, and modifies the configuration to minimize the differences between the configuration's Ae,r and the low-boom target. The new inverse design process is used to modify a supersonic demonstrator concept for a cruise Mach number of 1.6 and a cruise weight of 30,000 lb. The modified configuration has a fully shaped ground signature that has a perceived loudness (PLdB) value of 78.5, while the original configuration has a partially shaped aft signature with a PLdB of 82.3.

  7. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to overcome this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.

  8. Seismic waveform inversion for core-mantle boundary topography

    NASA Astrophysics Data System (ADS)

    Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico

    2014-07-01

    The topography of the core-mantle boundary (CMB) is directly linked to the dynamics of both the mantle and the outer core, although it is poorly constrained and understood. Recent studies have produced topography models with mutual agreement up to degree 2. A broad-band waveform inversion strategy is introduced and applied here, with relatively low computational cost and based on a first-order Born approximation. Its performance is validated using synthetic waveforms calculated in theoretical earth models that include different topography patterns with varying lateral wavelengths, from 600 to 2500 km, and magnitudes (˜10 km peak-to-peak). The source-receiver geometry focuses mainly on the Pdiff, PKP, PcP and ScS phases. The results show that PKP branches, PcP and ScS generally perform well and in a similar fashion, while Pdiff yields unsatisfactory results. We investigate also how 3-D mantle correction influences the output models, and find that despite the disturbance introduced, the models recovered do not appear to be biased, provided that the 3-D model is correct. Using cross-correlated traveltimes, we derive new topography models from both P and S waves. The static corrections used to remove the mantle effect are likely to affect the inversion, compromising the agreement between models derived from P and S data. By modelling traveltime residuals starting from sensitivity kernels, we show how the simultaneous use of volumetric and boundary kernels can reduce the bias coming from mantle structures. The joint inversion approach should be the only reliable method to invert for CMB topography using absolute cross-correlation traveltimes.

  9. Network Inference via the Time-Varying Graphical Lasso

    PubMed Central

    Hallac, David; Park, Youngsuk; Boyd, Stephen; Leskovec, Jure

    2018-01-01

    Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability. PMID:29770256
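The sparse inverse-covariance step at the heart of TVGL can be sketched with the standard ADMM scheme for the static graphical lasso (TVGL adds a temporal coupling penalty between consecutive estimates, which this sketch omits):

```python
import numpy as np

def graphical_lasso_admm(S, lam, rho=1.0, iters=200):
    """Sparse inverse-covariance estimation by ADMM for a single time step."""
    p = S.shape[0]
    Z, U = np.eye(p), np.zeros((p, p))
    for _ in range(iters):
        # Theta update: eigendecomposition of rho*(Z - U) - S
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        w_new = (w + np.sqrt(w**2 + 4.0 * rho)) / (2.0 * rho)
        Theta = Q @ np.diag(w_new) @ Q.T
        # Z update: soft-thresholding enforces sparsity off the diagonal
        A = Theta + U
        Z = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
        np.fill_diagonal(Z, np.diag(A))     # diagonal left unpenalized
        U = U + Theta - Z
    return Z

rng = np.random.default_rng(1)
# true precision matrix: chain graph -> tridiagonal inverse covariance
P = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(P), size=2000)
S = np.cov(X.T)
Theta_hat = graphical_lasso_admm(S, lam=0.1)
```

Entries of the estimated precision matrix that correspond to absent edges of the chain graph are shrunk towards zero, which is how the inferred network is read off.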

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Na; Zhang, Peng; Kang, Wei

Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.

  11. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
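The serial KMC step that SPKMC parallelizes can be sketched as follows; the biased random walker is an illustrative stand-in for the paper's electron-transfer events:

```python
import numpy as np

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo step: choose an event with
    probability proportional to its rate, and advance time by an
    exponentially distributed increment with mean 1/sum(rates)."""
    total = rates.sum()
    event = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
    dt = -np.log(rng.random()) / total
    return event, dt

# toy system: a biased 1D random walker with right/left hop rates
rng = np.random.default_rng(0)
k_right, k_left = 3.0, 1.0
pos, t = 0, 0.0
for _ in range(10000):
    event, dt = kmc_step(np.array([k_right, k_left]), rng)
    pos += 1 if event == 0 else -1
    t += dt
```

Because the waiting time scales as the inverse of the total rate, the simulated time per event shrinks as the system grows, which is the size-scaling problem the synchronous parallel formulation addresses.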

  12. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin

    2011-12-01

Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that, under sufficient backward iteration of the symbolic vectors, the values in phase space do not always converge on their initial values; this behaviour is characterized in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performances of the initial vector estimation with different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.
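The backward-iteration idea can be illustrated on the fully chaotic logistic map, i.e. a single uncoupled lattice site; recovering CML initial vectors additionally involves the coupling term, which this sketch omits:

```python
import numpy as np

def logistic(x):                      # fully chaotic logistic map
    return 4.0 * x * (1.0 - x)

def estimate_x0(symbols, x_guess=0.5):
    """Recover the initial value by backward iteration, selecting the inverse
    branch of the map with each symbol; the backward map is contracting on
    average, so an arbitrary guess converges towards the true x0."""
    x = x_guess
    for s in reversed(symbols):
        r = np.sqrt(1.0 - x)
        x = (1.0 - r) / 2.0 if s == 0 else (1.0 + r) / 2.0
    return x

# forward trajectory and its symbolic sequence (0: x < 0.5, 1: x >= 0.5)
x0, n = 0.3141592653589793, 50
xs = [x0]
for _ in range(n):
    xs.append(logistic(xs[-1]))
symbols = [0 if x < 0.5 else 1 for x in xs[:-1]]

x0_hat = estimate_x0(symbols)
```

Each symbol selects which of the two preimage branches to take, so the symbolic sequence alone pins down the initial value to within an exponentially small interval.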

  13. 2.5D complex resistivity modeling and inversion using unstructured grids

    NASA Astrophysics Data System (ADS)

    Xu, Kaijun; Sun, Jie

    2016-04-01

The complex resistivity of rocks and ores has long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity, and it has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from the CCM parameters: direct resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). It is therefore important to obtain the complex parameters of a geologic body. Complex structures and terrain are difficult to approximate with traditional rectangular grids. In order to enhance the numerical accuracy and rationality of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of the frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm in the inversion of 2.5D complex resistivity. An adaptive finite-element method is applied for solving the 2.5D complex resistivity forward modeling of a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, the pseudo-delta function is used to distribute the electric dipole source. The electromagnetic fields can then be expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by the anomalous conductivity of inhomogeneities. Finally, we calculated the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented based on the conjugate gradient algorithm, which does not require explicitly forming the sensitivity matrix but instead directly computes the product of the sensitivity matrix, or its transpose, with a vector.
In addition, the inversion target zones are segmented with fine grids and the background zones with coarse grids, which reduces the number of inversion cells and improves computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The results of theoretical calculation indicate that the modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Using unstructured grids can improve the accuracy of modeling, but inversion with a large number of grid cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We acknowledge the support of the National Natural Science Foundation of China (41304094).
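The CCM referred to above has a closed form that is easy to evaluate; a minimal sketch of the complex resistivity spectrum (parameter values are illustrative):

```python
import numpy as np

def cole_cole(freq, rho0, m, tau, c):
    """Complex resistivity spectrum of the Cole-Cole model:
    rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))."""
    iwt = (1j * 2.0 * np.pi * freq * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

f = np.logspace(-2, 4, 61)                         # 0.01 Hz .. 10 kHz
rho = cole_cole(f, rho0=100.0, m=0.3, tau=0.1, c=0.5)
```

At low frequency the magnitude approaches ρ0, while at high frequency it approaches ρ0·(1 − m); the time constant τ and exponent c control where and how sharply the dispersion occurs.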

  14. FAST: a framework for simulation and analysis of large-scale protein-silicon biosensor circuits.

    PubMed

    Gu, Ming; Chakrabartty, Shantanu

    2013-08-01

    This paper presents a computer aided design (CAD) framework for verification and reliability analysis of protein-silicon hybrid circuits used in biosensors. It is envisioned that similar to integrated circuit (IC) CAD design tools, the proposed framework will be useful for system level optimization of biosensors and for discovery of new sensing modalities without resorting to laborious fabrication and experimental procedures. The framework referred to as FAST analyzes protein-based circuits by solving inverse problems involving stochastic functional elements that admit non-linear relationships between different circuit variables. In this regard, FAST uses a factor-graph netlist as a user interface and solving the inverse problem entails passing messages/signals between the internal nodes of the netlist. Stochastic analysis techniques like density evolution are used to understand the dynamics of the circuit and estimate the reliability of the solution. As an example, we present a complete design flow using FAST for synthesis, analysis and verification of our previously reported conductometric immunoassay that uses antibody-based circuits to implement forward error-correction (FEC).

  15. Real-time inverse kinematics and inverse dynamics for lower limb applications using OpenSim

    PubMed Central

    Modenese, L.; Lloyd, D.G.

    2017-01-01

Real-time estimation of joint angles and moments can be used for rapid evaluation in clinical, sport, and rehabilitation contexts. However, real-time calculation of kinematics and kinetics is currently based on approximate solutions or generic anatomical models. We present a real-time system based on OpenSim solving inverse kinematics and dynamics without simplifications at 2000 frames per second with less than 31.5 ms of delay. We describe the software architecture, sensitivity analyses to minimise delays and errors, and compare offline and real-time results. This system has the potential to strongly impact current rehabilitation practices enabling the use of personalised musculoskeletal models in real-time. PMID:27723992

  16. Real-time inverse kinematics and inverse dynamics for lower limb applications using OpenSim.

    PubMed

    Pizzolato, C; Reggiani, M; Modenese, L; Lloyd, D G

    2017-03-01

Real-time estimation of joint angles and moments can be used for rapid evaluation in clinical, sport, and rehabilitation contexts. However, real-time calculation of kinematics and kinetics is currently based on approximate solutions or generic anatomical models. We present a real-time system based on OpenSim solving inverse kinematics and dynamics without simplifications at 2000 frames per second with less than 31.5 ms of delay. We describe the software architecture, sensitivity analyses to minimise delays and errors, and compare offline and real-time results. This system has the potential to strongly impact current rehabilitation practices enabling the use of personalised musculoskeletal models in real-time.
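The inverse-dynamics half of such a pipeline can be sketched for a single-link segment; the parameters and trajectory below are illustrative, and OpenSim of course solves the full musculoskeletal multibody problem:

```python
import numpy as np

# Single-link segment: tau = I*qdd + m*g*lc*sin(q). Given a measured joint
# trajectory q(t), inverse dynamics recovers the joint moment tau(t).
m, lc, I, g = 5.0, 0.25, 0.35, 9.81    # mass, COM distance, inertia (illustrative)

def inverse_dynamics(t, q):
    dt = t[1] - t[0]
    qd = np.gradient(q, dt)            # finite-difference velocity
    qdd = np.gradient(qd, dt)          # and acceleration
    return I * qdd + m * g * lc * np.sin(q)

t = np.linspace(0.0, 1.0, 1001)
q = 0.4 * np.sin(2.0 * np.pi * t)      # prescribed joint angle (rad)
tau = inverse_dynamics(t, q)
```

In a real-time setting the finite-difference step runs on a sliding window of the most recent frames, which is where the delay budget quoted in the abstract is spent.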

  17. Inverse problems in the modeling of vibrations of flexible beams

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Powers, R. K.; Rosen, I. G.

    1987-01-01

    The formulation and solution of inverse problems for the estimation of parameters which describe damping and other dynamic properties in distributed models for the vibration of flexible structures is considered. Motivated by a slewing beam experiment, the identification of a nonlinear velocity dependent term which models air drag damping in the Euler-Bernoulli equation is investigated. Galerkin techniques are used to generate finite dimensional approximations. Convergence estimates and numerical results are given. The modeling of, and related inverse problems for the dynamics of a high pressure hose line feeding a gas thruster actuator at the tip of a cantilevered beam are then considered. Approximation and convergence are discussed and numerical results involving experimental data are presented.

  18. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

Many aspects of earthquake source dynamics like dynamic stress drop, rupture velocity and directivity, etc. are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process with double pulses of energy release.
However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
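The Bayesian sampling machinery can be sketched on a toy linear inverse problem with a positivity constraint, using random-walk Metropolis-Hastings; the parameterization and dimensions are illustrative, not the paper's pseudo-spectral one:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy linear inverse problem d = G @ m + noise, with positivity constraint
G = rng.normal(size=(50, 3))
m_true = np.array([1.0, 0.5, 2.0])
sigma = 0.1
d = G @ m_true + rng.normal(0.0, sigma, 50)

def log_post(m):
    if np.any(m < 0.0):                 # positivity prior: zero density for m < 0
        return -np.inf
    r = d - G @ m
    return -0.5 * np.sum(r**2) / sigma**2

m, lp = np.ones(3), log_post(np.ones(3))
samples = []
for i in range(20000):
    prop = m + rng.normal(0.0, 0.02, 3)       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        m, lp = prop, lp_prop
    if i >= 5000:                             # discard burn-in
        samples.append(m)
samples = np.array(samples)
post_mean, post_std = samples.mean(axis=0), samples.std(axis=0)
```

The posterior standard deviations fall out of the same sample set as the point estimate, which is the "minimal additional computational effort" point made in the abstract.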

  19. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
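The ABC variant discussed at the end of the abstract can be made concrete with a minimal rejection-ABC sketch, in which a Poisson toy model stands in for the forest simulator:

```python
import numpy as np

rng = np.random.default_rng(3)

# "field data" generated from the model at an unknown parameter
theta_true = 4.0
observed = rng.poisson(theta_true, 100)
s_obs = observed.mean()                  # summary statistic

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 10.0)       # draw from the prior
    sim = rng.poisson(theta, 100)        # run the forward simulation
    if abs(sim.mean() - s_obs) < 0.1:    # keep draws within tolerance epsilon
        accepted.append(theta)
accepted = np.array(accepted)
```

Rejection ABC never evaluates a likelihood: the accepted prior draws approximate the posterior, with the tolerance epsilon trading accuracy against acceptance rate. The parametric likelihood approximation used in the paper replaces this accept/reject kernel with a distribution fitted to repeated simulations.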

  20. Molecular interactions of agonist and inverse agonist ligands at serotonin 5-HT2C G protein-coupled receptors: computational ligand docking and molecular dynamics studies validated by experimental mutagenesis results

    NASA Astrophysics Data System (ADS)

    Córdova-Sintjago, Tania C.; Liu, Yue; Booth, Raymond G.

    2015-02-01

To understand molecular determinants for ligand activation of the serotonin 5-HT2C G protein-coupled receptor (GPCR), a drug target for obesity and neuropsychiatric disorders, a 5-HT2C homology model was built according to an adrenergic β2 GPCR (β2AR) structure and validated using a 5-HT2B GPCR crystal structure. The models were equilibrated in a simulated phosphatidylcholine membrane for ligand docking and molecular dynamics studies. Ligands included (2S, 4R)-(-)-trans-4-(3'-bromo- and trifluoro-phenyl)-N,N-dimethyl-1,2,3,4-tetrahydronaphthalene-2-amine (3'-Br-PAT and 3'-CF3-PAT), a 5-HT2C agonist and inverse agonist, respectively. Distinct interactions of 3'-Br-PAT and 3'-CF3-PAT at the wild-type (WT) 5-HT2C receptor model were observed and experimental 5-HT2C receptor mutagenesis studies were undertaken to validate the modelling results. For example, the inverse agonist 3'-CF3-PAT docked deeper in the WT 5-HT2C binding pocket and altered the orientation of transmembrane helix (TM) 6 in comparison to the agonist 3'-Br-PAT, suggesting that changes in TM orientation that result from ligand binding impact function. For both PATs, mutation of 5-HT2C residues S3.36, T3.37, and F5.47 to alanine resulted in significantly decreased affinity, as predicted from modelling results. It was concluded that upon PAT binding, 5-HT2C residues T3.37 and F5.47 in TMs 3 and 5, respectively, engage in inter-helical interactions with TMs 4 and 6, respectively. The movement of TMs 5 and 6 upon agonist and inverse agonist ligand binding observed in the 5-HT2C receptor modelling studies was similar to movements reported for the activation and deactivation of the β2AR, suggesting common mechanisms among aminergic neurotransmitter GPCRs.

  1. Nonlinear Stimulated Raman Exact Passage by Resonance-Locked Inverse Engineering

    NASA Astrophysics Data System (ADS)

    Dorier, V.; Gevorgyan, M.; Ishkhanyan, A.; Leroy, C.; Jauslin, H. R.; Guérin, S.

    2017-12-01

    We derive an exact and robust stimulated Raman process for nonlinear quantum systems driven by pulsed external fields. The external fields are designed with closed-form expressions from the inverse engineering of a given efficient and stable dynamics. This technique allows one to induce a controlled population inversion which surpasses the usual nonlinear stimulated Raman adiabatic passage efficiency.

  2. The First Time Ever I Saw Your Feet: Inversion Effect in Newborns' Sensitivity to Biological Motion

    ERIC Educational Resources Information Center

    Bardi, Lara; Regolin, Lucia; Simion, Francesca

    2014-01-01

    Inversion effect in biological motion perception has been recently attributed to an innate sensitivity of the visual system to the gravity-dependent dynamic of the motion. However, the specific cues that determine the inversion effect in naïve subjects were never investigated. In the present study, we have assessed the contribution of the local…

  3. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth interior at different scales ranging from the near surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D PSV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of an expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be applied in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performances with realistic synthetic case studies.

  4. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
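A GA-based inversion of this kind can be sketched compactly. The following is a hypothetical illustration only: a minimal real-coded GA (tournament selection, blend crossover, Gaussian mutation) recovers the two parameters of a log-normal size distribution from noisy synthetic data; the paper's actual scattering kernel, GA settings, and noise model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(params, x):
    """Hypothetical forward model: a log-normal size distribution, standing in
    for the angular light-scattering kernel of the paper."""
    mu, sig = params
    return np.exp(-0.5 * ((np.log(x) - mu) / sig) ** 2) / (x * sig * np.sqrt(2 * np.pi))

x = np.linspace(0.1, 5.0, 60)
data = model((0.5, 0.3), x) + 0.01 * rng.standard_normal(x.size)  # noisy "measurements"

def fitness(p):
    return -np.sum((model(p, x) - data) ** 2)  # higher is better

lo, hi = np.array([-1.0, 0.05]), np.array([2.0, 1.0])  # search bounds (mu, sigma)
pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    new = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)          # tournament of two
        a = pop[i] if scores[i] > scores[j] else pop[j]
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if scores[i] > scores[j] else pop[j]
        w = rng.random()
        child = w * a + (1 - w) * b                    # blend crossover
        child += 0.02 * rng.standard_normal(2)         # Gaussian mutation
        new.append(np.clip(child, lo, hi))             # keep within bounds
    pop = np.array(new)
best = pop[np.argmax([fitness(p) for p in pop])]
```

As the abstract notes, no a priori form of the distribution is imposed beyond the parameterization chosen for the search.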

  5. On improving the algorithm efficiency in the particle-particle force calculations

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2016-09-01

    The problem of calculating inter-particle forces in particle-particle (PP) simulation models occupies an important place in scientific computing. Such simulation models are used in diverse scientific applications arising in astrophysics, plasma physics, particle accelerators, etc., where long-range forces are considered. Inverse-square laws such as Coulomb's law of electrostatic forces and Newton's law of universal gravitation are examples of laws pertaining to long-range forces. The standard naïve PP method outlined, for example, by Hockney and Eastwood [1] is straightforward, processing all pairs of particles in a double nested loop. The PP algorithm provides the best accuracy of all possible methods, but its computational complexity is O(Np²), where Np is the total number of particles involved. The low efficiency of the PP algorithm becomes a challenging issue in cases where high accuracy is required. An example can be taken from charged particle beam dynamics where, when computing the beam's own space charge, so-called macro-particles are used (see e.g., Humphries Jr. [2], Kozynchenko and Svistunov [3]).
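The standard naïve PP method described above is just a double nested loop over all particle pairs. A minimal sketch for an inverse-square law (Coulomb-style; names and units are illustrative assumptions) that makes the O(Np²) cost explicit:

```python
import numpy as np

def pairwise_forces(pos, q, k=1.0):
    """Naive particle-particle (PP) force sum for an inverse-square law.

    pos : (Np, 3) particle positions; q : (Np,) charges (or masses).
    Cost is O(Np^2): every pair is visited exactly once in the double loop.
    """
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[j] - pos[i]
            d2 = np.dot(r, r)
            # Coulomb-style pair force, magnitude k*qi*qj/d^2 along the pair axis
            fij = k * q[i] * q[j] * r / d2 ** 1.5
            f[i] -= fij   # repulsive for like charges
            f[j] += fij   # Newton's third law: reuse the pair evaluation
    return f
```

Each pair is evaluated once and mirrored via Newton's third law, which halves the constant factor but leaves the quadratic scaling intact; that scaling is precisely what motivates the efficiency improvements the paper discusses.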

  6. Coarse-Grained Models Reveal Functional Dynamics – II. Molecular Dynamics Simulation at the Coarse-Grained Level – Theories and Biological Applications

    PubMed Central

    Chng, Choon-Peng; Yang, Lee-Wei

    2008-01-01

    Molecular dynamics (MD) simulation has remained the most indispensable tool in studying equilibrium/non-equilibrium conformational dynamics since its advent 30 years ago. With advances in spectroscopy accompanying solved biocomplexes in growing sizes, sampling their dynamics that occur at biologically interesting spatial/temporal scales becomes computationally intractable; this motivated the use of coarse-grained (CG) approaches. CG-MD models are used to study folding and conformational transitions in reduced resolution and can employ enlarged time steps due to the absence of some of the fastest motions in the system. The Boltzmann-Inversion technique, heavily used in parameterizing these models, provides a smoothed-out effective potential on which molecular conformation evolves at a faster pace thus stretching simulations into tens of microseconds. As a result, a complete catalytic cycle of HIV-1 protease or the assembly of lipid-protein mixtures could be investigated by CG-MD to gain biological insights. In this review, we survey the theories developed in recent years, which are categorized into Folding-based and Molecular-Mechanics-based. In addition, physical bases in the selection of CG beads/time-step, the choice of effective potentials, representation of solvent, and restoration of molecular representations back to their atomic details are systematically discussed. PMID:19812774
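The Boltzmann-Inversion step mentioned above has a compact form: the effective coarse-grained pair potential is taken as the potential of mean force, U(r) = -kBT ln g(r), computed from a distribution sampled in an atomistic run. A minimal numerical sketch with a toy radial distribution function (all values illustrative, not from any particular system):

```python
import numpy as np

def boltzmann_invert(r, g, kBT=2.494):
    """Direct Boltzmann inversion: effective CG potential from a distribution.

    g : radial distribution function g(r) sampled from an atomistic simulation.
    kBT : thermal energy; 2.494 kJ/mol corresponds to T = 300 K.
    Returns U(r) = -kBT * ln g(r), the potential of mean force used as a
    first-guess coarse-grained pair potential.
    """
    g = np.clip(g, 1e-12, None)     # avoid log(0) where g(r) vanishes
    U = -kBT * np.log(g)
    return U - U[-1]                # shift so U -> 0 at the cutoff

# Toy g(r) with a single solvation peak at r = 0.45 (arbitrary units)
r = np.linspace(0.2, 1.5, 100)
g = 1.0 + 0.8 * np.exp(-((r - 0.45) / 0.08) ** 2)
U = boltzmann_invert(r, g)
```

In iterative variants of this scheme, the potential is refined until the CG simulation reproduces the target g(r); the single inversion above is only the starting guess.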

  7. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
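The core idea of generating a parametric likelihood approximation directly from stochastic simulations can be illustrated with a toy simulator: repeated runs at a proposed parameter value are summarized by a fitted normal distribution, whose density at the observed summary statistic serves as the likelihood inside a Metropolis sampler. Everything below (simulator, summary statistic, settings) is a hypothetical stand-in, not the FORMIND setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_model(theta, n=200):
    """Stand-in stochastic simulator (e.g. an individual-based forest model):
    returns one noisy summary statistic per run."""
    return rng.normal(theta, 1.0, size=n).mean()

def synthetic_loglik(theta, obs, n_rep=30):
    """Parametric likelihood approximation: fit a normal distribution to
    repeated simulator outputs and evaluate the observed summary under it."""
    sims = np.array([stochastic_model(theta) for _ in range(n_rep)])
    mu, sd = sims.mean(), sims.std() + 1e-9
    return -0.5 * ((obs - mu) / sd) ** 2 - np.log(sd)

obs = 3.2                      # hypothetical observed summary statistic
theta, chain = 0.0, []
for _ in range(1000):          # plain Metropolis over theta
    prop = theta + 0.5 * rng.standard_normal()
    if np.log(rng.random()) < synthetic_loglik(prop, obs) - synthetic_loglik(theta, obs):
        theta = prop
    chain.append(theta)
```

The variability entering the likelihood here comes from the simulator itself, not from an assumed observation-error model, which is the distinction the abstract draws against conventional likelihood formulations.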

  8. Visco-elastic controlled-source full waveform inversion without surface waves

    NASA Astrophysics Data System (ADS)

    Paschke, Marco; Krause, Martin; Bleibinhaus, Florian

    2016-04-01

    We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.

  9. The Breakup of Temperature Inversions In Steep Valleys

    NASA Astrophysics Data System (ADS)

    Colette, A.; Street, R.

    The purpose of this research is to model and provide a better understanding of temperature inversion breakup in steep valleys. The Advanced Regional Prediction System (ARPS), a three-dimensional, compressible, and non-hydrostatic modeling tool developed by the Center for Analysis and Prediction of Storms at the University of Oklahoma, was used. Many field studies indicate that the evolution of the convective and inversion layers is strongly dependent on the surrounding topography. In relatively open valleys, the convective boundary layer usually grows from the bottom of the valley, while in steeper cases the upslope morning winds affect the dynamics of the mixing layer, resulting in the destruction of the inversion from both its bottom and its top (see Whiteman 1980). ARPS allows one to perform accurate simulations of such situations. First, written in terrain-following coordinates, it handles steep topographies; then its extensive radiation and surface flux packages provide a good treatment of land-related processes. Moreover, ARPS accounts for the incidence angle of sunrays, distinguishing between exposed and non-exposed mountain slopes. However, it neglects topographic shade, which can delay the sunrise by an hour or more in steep valleys. A new subroutine described by Colette et al. 2002 is thus used to compute the projected shade on the surrounding topography. Simulations of temperature inversion breakup for various two-dimensional valleys are presented. The time scale of evolution of the mixing layer is in good agreement with field studies and, as expected, the convective boundary layer shows an asymmetry between east- and west-facing slopes. The different patterns of inversion breakup documented by Whiteman are also reproduced. These simulations of idealized cases give a better understanding of inversion breakup in steep valleys. 
Our code is now being applied to a real case: the study of a peculiar wind, la Ora del Garda, caused by the interaction between a lake breeze and a valley wind in the Garda Valley (Northern Italy). Preliminary simulations will be presented. The support of AC by TotalFinaElf and RS by the Physical Meteorology Program of NSF and the VTMX Program of DoE is appreciated.

  10. Transurethral Ultrasound Diffraction Tomography

    DTIC Science & Technology

    2007-03-01

    the covariance matrix was derived. The covariance reduced to that of the X-ray CT under the assumptions of linear operator and real data.[5] The...the covariance matrix in the linear x-ray computed tomography is a special case of the inverse scattering matrix derived in this paper. The matrix was...is derived in Sec. IV, and its relation to that of the linear x-ray computed tomography appears in Sec. V. In Sec. VI, the inverse scattering

  11. Novel Image Quality Control Systems(Add-On). Innovative Computational Methods for Inverse Problems in Optical and SAR Imaging

    DTIC Science & Technology

    2007-02-28

    Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and...1767-1782, 2006. 31. Z. Mu, R. Plemmons, and P. Santago. Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex...rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies

  12. Search for β2 Adrenergic Receptor Ligands by Virtual Screening via Grid Computing and Investigation of Binding Modes by Docking and Molecular Dynamics Simulations

    PubMed Central

    Bai, Qifeng; Shao, Yonghua; Pan, Dabo; Zhang, Yang; Liu, Huanxiang; Yao, Xiaojun

    2014-01-01

    We designed a program called MolGridCal that can be used to screen small-molecule databases via grid computing based on the JPPF grid environment. Building on MolGridCal, we proposed an integrated strategy for virtual screening and binding-mode investigation that combines molecular docking, molecular dynamics (MD) simulations and free energy calculations. To test the effectiveness of MolGridCal, we screened potential ligands for the β2 adrenergic receptor (β2AR) from a database containing 50,000 small molecules. MolGridCal can not only send tasks to the grid server automatically, but also distribute tasks using the screensaver function. Among the virtual screening results, the known agonist BI-167107 of β2AR is ranked in the top 2% of the screened candidates, indicating that MolGridCal gives reasonable results. To further study the binding mode and refine the results of MolGridCal, more accurate docking and scoring methods are used to estimate the binding affinity for the top three molecules (agonist BI-167107, neutral antagonist alprenolol and inverse agonist ICI 118,551). The results indicate that the agonist BI-167107 has the best binding affinity. MD simulation and free energy calculation are employed to investigate the dynamic interaction mechanism between the ligands and β2AR. The results show that the agonist BI-167107 also has the lowest binding free energy. This study provides a new way to perform virtual screening effectively by integrating molecular docking based on grid computing, MD simulations and free energy calculations. The source code of MolGridCal is freely available at http://molgridcal.codeplex.com. PMID:25229694

  13. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often yields sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust-region method, which can provide globally convergent solutions, is a good way to overcome this shortcoming. A trust-region method for sparse inversion has been proposed previously; however, its efficiency must be improved to be suitable for large-scale computation. In this paper, a new L1-norm trust-region model is proposed for seismic data restoration, and a robust gradient projection method is utilized for solving the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust-region method achieves excellent computation speed and is a viable alternative for large-scale computation.
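The paper's L1-norm trust-region solver is not reproduced here, but the underlying sparsity-promoting formulation, min ||Ax - y||²/2 + λ||x||₁, can be illustrated with a simple proximal-gradient (ISTA) iteration on a toy restoration problem (all sizes and values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy restoration: recover a sparse signal x from undersampled measurements y = A x.
n, m = 100, 40
x_true = np.zeros(n)
x_true[[5, 37, 72]] = [1.5, -2.0, 1.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def ista(A, y, lam=0.01, steps=500):
    """Proximal-gradient (ISTA) iteration for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the misfit gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)            # gradient of the data-misfit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

x_hat = ista(A, y)
```

ISTA is a line-search-style method of the kind the abstract contrasts with trust-region approaches; it serves here only to show the sparse-recovery objective itself.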

  14. Using machine learning to accelerate sampling-based inversion

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
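The surrogate idea described above can be sketched with a toy 1D inverse problem: a Gaussian-process mean fitted to a handful of exact forward evaluations replaces the expensive forward operator inside a Metropolis sampler. The forward function, kernel, and settings below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):
    """Stand-in for an expensive forward operator (e.g. synthetic seismograms)."""
    return np.sin(3.0 * m) + 0.5 * m

def rbf(a, b, ell=0.3):
    """Squared-exponential covariance between two 1D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Training set: a handful of exact (expensive) forward evaluations.
M = np.linspace(-2.0, 2.0, 15)
D = forward(M)

def gp_predict(x, M, D, noise=1e-6):
    """GP regression mean: a cheap approximation to forward()."""
    K = rbf(M, M) + noise * np.eye(len(M))
    alpha = np.linalg.solve(K, D)
    return rbf(x, M) @ alpha

# Metropolis sampling of p(m | d_obs) with the surrogate in the likelihood.
d_obs, sigma = forward(np.array([0.7]))[0], 0.1

def logpost(mm):
    r = d_obs - gp_predict(np.array([mm]), M, D)[0]
    return -0.5 * (r / sigma) ** 2

m, samples = 0.0, []
for _ in range(2000):
    prop = m + 0.3 * rng.standard_normal()
    if np.log(rng.random()) < logpost(prop) - logpost(m):
        m = prop
    samples.append(m)
```

In the refinement loop the abstract alludes to, candidate models where the surrogate is uncertain would be evaluated exactly and appended to (M, D); that step is omitted here for brevity.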

  15. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy to estimate quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses time-domain Full Waveform Inversion (FWI) of 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR data is computationally expensive because it is based on the computation of full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.

  16. Numerical Inverse Scattering for the Toda Lattice

    NASA Astrophysics Data System (ADS)

    Bilman, Deniz; Trogdon, Thomas

    2017-06-01

    We present a method to compute the inverse scattering transform (IST) for the famed Toda lattice by solving the associated Riemann-Hilbert (RH) problem numerically. Deformations for the RH problem are incorporated so that the IST can be evaluated in O(1) operations for arbitrary points in the ( n, t)-domain, including short- and long-time regimes. No time-stepping is required to compute the solution because ( n, t) appear as parameters in the associated RH problem. The solution of the Toda lattice is computed in long-time asymptotic regions where the asymptotics are not known rigorously.

  17. Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan

    2016-09-01

    Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity model and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for prior model 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we hold confidence that significant and practical advances in this direction have been accomplished.

  18. Modularized seismic full waveform inversion based on waveform sensitivity kernels - The software package ASKI

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel

    2015-04-01

    We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are held completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. 
The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned for the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski. Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be conveniently accessible. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications at different scales and geometries.

  19. Guidance of Nonlinear Nonminimum-Phase Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, that achieve exact-tracking, with approximation techniques, that modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity which is an obstruction to applying stable inversion techniques and (b) to reduce large preactuation times needed to apply stable inversion for near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics for illustrating the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transient. While previous works had studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches), such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which could be written in a feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, which was defined via a matrix differential equation.

  20. Dynamic Mechanical Characterisation of Poro-Viscoelastic Materials

    NASA Astrophysics Data System (ADS)

    Renault, Amelie

    Poro-viscoelastic materials are well modelled by the Biot-Allard equations. This model requires a number of geometrical parameters to describe the macroscopic geometry of the material, and elastic parameters to describe the elastic properties of the material skeleton. Several characterisation methods for the viscoelastic parameters of porous materials are studied in this thesis. Firstly, quasistatic and resonant characterisation methods are described and analysed. Secondly, a new inverse dynamic characterisation of the same modulus is developed. The latter involves a two-layer metal-porous beam excited at its centre, for which the input mobility is measured. The set-up is simplified compared to previous methods. The parameters are obtained via an inversion procedure based on the minimisation of a cost function comparing the measured and calculated frequency response functions (FRFs). The calculation is done with a general laminate model. A parametric study identifies the optimal beam dimensions for maximum sensitivity of the inversion model. The advantage of using a code which does not take fluid-structure interactions into account is the low computation time; for most materials, the effect of this interaction on the elastic properties is negligible. Several materials are tested to demonstrate the performance of the method compared to the classical quasi-static approaches, and to establish its limitations and range of validity. Finally, conclusions about their utilisation are given. Keywords: elastic parameters, porous materials, anisotropy, vibration.

  1. Gravity Wave Dynamics in a Mesospheric Inversion Layer: 1. Reflection, Trapping, and Instability Dynamics

    PubMed Central

    Laughman, Brian; Wang, Ling; Lund, Thomas S.; Collins, Richard L.

    2018-01-01

    An anelastic numerical model is employed to explore the dynamics of gravity waves (GWs) encountering a mesosphere inversion layer (MIL) having a moderate static stability enhancement and a layer of weaker static stability above. Instabilities occur within the MIL when the GW amplitude approaches that required for GW breaking due to compression of the vertical wavelength accompanying the increasing static stability. Thus, MILs can cause large‐amplitude GWs to yield instabilities and turbulence below the altitude where they would otherwise arise. Smaller‐amplitude GWs encountering a MIL do not lead to instability and turbulence but do exhibit partial reflection and transmission, and the transmission is a smaller fraction of the incident GW when instabilities and turbulence arise within the MIL. Additionally, greater GW transmission occurs for weaker MILs and for GWs having larger vertical wavelengths relative to the MIL depth and for lower GW intrinsic frequencies. These results imply similar dynamics for inversions due to other sources, including the tropopause inversion layer, the high stability capping the polar summer mesopause, and lower frequency GWs or tides having sufficient amplitudes to yield significant variations in stability at large and small vertical scales. MILs also imply much stronger reflections and less coherent GW propagation in environments having significant fine structure in the stability and velocity fields than in environments that are smoothly varying. PMID:29576994

  2. Gravity Wave Dynamics in a Mesospheric Inversion Layer: 1. Reflection, Trapping, and Instability Dynamics

    NASA Astrophysics Data System (ADS)

    Fritts, David C.; Laughman, Brian; Wang, Ling; Lund, Thomas S.; Collins, Richard L.

    2018-01-01

    An anelastic numerical model is employed to explore the dynamics of gravity waves (GWs) encountering a mesosphere inversion layer (MIL) having a moderate static stability enhancement and a layer of weaker static stability above. Instabilities occur within the MIL when the GW amplitude approaches that required for GW breaking due to compression of the vertical wavelength accompanying the increasing static stability. Thus, MILs can cause large-amplitude GWs to yield instabilities and turbulence below the altitude where they would otherwise arise. Smaller-amplitude GWs encountering a MIL do not lead to instability and turbulence but do exhibit partial reflection and transmission, and the transmission is a smaller fraction of the incident GW when instabilities and turbulence arise within the MIL. Additionally, greater GW transmission occurs for weaker MILs and for GWs having larger vertical wavelengths relative to the MIL depth and for lower GW intrinsic frequencies. These results imply similar dynamics for inversions due to other sources, including the tropopause inversion layer, the high stability capping the polar summer mesopause, and lower frequency GWs or tides having sufficient amplitudes to yield significant variations in stability at large and small vertical scales. MILs also imply much stronger reflections and less coherent GW propagation in environments having significant fine structure in the stability and velocity fields than in environments that are smoothly varying.

  3. Spin-charge coupled dynamics driven by a time-dependent magnetization

    NASA Astrophysics Data System (ADS)

    Tölle, Sebastian; Eckern, Ulrich; Gorini, Cosimo

    2017-03-01

    The spin-charge coupled dynamics in a thin, magnetized metallic system are investigated. The effective driving force acting on the charge carriers is generated by a dynamical magnetic texture, which can be induced, e.g., by a magnetic material in contact with a normal-metal system. We consider a general inversion-asymmetric substrate/normal-metal/magnet structure, which, by specifying the precise nature of each layer, can mimic various experimentally employed setups. Inversion symmetry breaking gives rise to an effective Rashba spin-orbit interaction. We derive general spin-charge kinetic equations which show that such spin-orbit interaction, together with anisotropic Elliott-Yafet spin relaxation, yields significant corrections to the magnetization-induced dynamics. In particular, we present a consistent treatment of the spin density and spin current contributions to the equations of motion, inter alia, identifying a term in the effective force which appears due to a spin current polarized parallel to the magnetization. This "inverse-spin-filter" contribution depends markedly on the parameter which describes the anisotropy in spin relaxation. To further highlight the physical meaning of the different contributions, the spin-pumping configuration of typical experimental setups is analyzed in detail. In the two-dimensional limit the buildup of dc voltage is dominated by the spin-galvanic (inverse Edelstein) effect. A measuring scheme that could isolate this contribution is discussed.

  4. A Toolkit for Forward/Inverse Problems in Electrocardiography within the SCIRun Problem Solving Environment

    PubMed Central

    Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrell J; Wang, Dafang F; Steffen, Michael; Brooks, Dana H; van Dam, Peter M; Macleod, Rob S

    2012-01-01

    Computational modeling in electrocardiography often requires the examination of cardiac forward and inverse problems in order to non-invasively analyze physiological events that are otherwise inaccessible or unethical to explore. The study of these models can be performed in the open-source SCIRun problem solving environment developed at the Center for Integrative Biomedical Computing (CIBC). A new toolkit within SCIRun provides researchers with essential frameworks for constructing and manipulating electrocardiographic forward and inverse models in a highly efficient and interactive way. The toolkit contains sample networks, tutorials and documentation which direct users through SCIRun-specific approaches in the assembly and execution of these specific problems. PMID:22254301

  5. A compressive sensing-based computational method for the inversion of wide-band ground penetrating radar data

    NASA Astrophysics Data System (ADS)

    Gelmini, A.; Gottardi, G.; Moriyama, T.

    2017-10-01

    This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.
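
    The role of sparsity-promoting regularisation in this kind of inversion can be illustrated with a much simpler scheme than MT-BCS. The sketch below substitutes iterative soft thresholding (ISTA) for the Bayesian solver; the problem sizes, scatterer amplitudes, and sensing matrix are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 40, 80                             # measurements << unknowns (sparse scatterers)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # toy sensing operator (one frequency)
x_true = np.zeros(n)
x_true[[7, 30, 61]] = [1.5, -2.0, 1.0]    # a few buried scatterers in a large domain
y = A @ x_true                            # noise-free GPR-like data

# ISTA: gradient step on the data misfit, then soft thresholding for sparsity.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.05
x = np.zeros(n)
for _ in range(3000):
    z = x + step * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

support = set(np.argsort(np.abs(x))[-3:])  # recovered scatterer locations
```

    The three largest recovered coefficients sit on the true scatterer locations; a multi-frequency (FH) scheme would couple several such problems, which is what the multi-task solver exploits.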

  6. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

    Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.

  7. Noise removal using factor analysis of dynamic structures: application to cardiac gated studies.

    PubMed

    Bruyant, P P; Sau, J; Mallet, J J

    1999-10-01

    Factor analysis of dynamic structures (FADS) facilitates the extraction of relevant data, usually with physiologic meaning, from a dynamic set of images. The result of this process is a set of factor images and curves plus some residual activity. The set of factor images and curves can be used to retrieve the original data with reduced noise using an inverse factor analysis process (iFADS). This improvement in image quality is expected because the inverse process does not use the residual activity, which is assumed to consist of noise. The goal of this work is to quantify and assess the efficiency of this method on gated cardiac images. A computer simulation of a planar cardiac gated study was performed. Noise was added to the simulated images, which were then processed by the FADS-iFADS program. The signal-to-noise ratios (SNRs) of the original and processed data were compared. Planar gated cardiac studies from 10 patients were tested. The data processed by FADS-iFADS were subtracted from the original data, and the result of the subtraction was examined to assess its noisy nature. The SNR is about five times greater after the FADS-iFADS process. The difference between original and processed data is noise only, i.e., the processed data equal the original data minus some white noise. The FADS-iFADS process removes an important part of the noise and is therefore a tool to improve the quality of cardiac images. Unlike smoothing filters, it does not decrease the spatial resolution, and unlike frequency-domain filters, it does not lose detail. Once the number of factors is chosen, the method is not operator dependent.
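
    As a minimal stand-in for the FADS-iFADS pipeline (which works with estimated factor images and curves rather than singular vectors), the sketch below keeps the two leading singular components of a synthetic dynamic study and discards the residual space; all sizes and factor shapes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gated study: 2 physiological factors (time curves x spatial images).
T, P = 16, 400                                      # frames, pixels
t = np.linspace(0, 2 * np.pi, T)
curves = np.stack([1 + np.sin(t), 1 + np.cos(t)])   # (2, T) factor curves
images = rng.uniform(0.0, 1.0, (2, P))              # (2, P) factor images
clean = curves.T @ images                           # (T, P) noise-free frames
noisy = clean + rng.normal(0.0, 0.3, (T, P))

# iFADS stand-in: keep the leading factor subspace, discard the residual space.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 2                                               # number of retained factors
denoised = (U[:, :k] * s[:k]) @ Vt[:k]

def snr(estimate):
    """Signal-to-noise ratio against the known noise-free data."""
    return np.sum(clean**2) / np.sum((estimate - clean) ** 2)
```

    Because the discarded residual space carries mostly noise, the reconstruction markedly improves the SNR without spatial smoothing, which mirrors the behaviour reported for iFADS.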

  8. Program manual for the Eppler airfoil inversion program

    NASA Technical Reports Server (NTRS)

    Thomson, W. G.

    1975-01-01

    A computer program is described for calculating the profile of an airfoil as well as the boundary layer momentum thickness and energy form parameter. The theory underlying the airfoil inversion technique developed by Eppler is discussed.

  9. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example, in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wave field in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time, and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture, but these parameters are poorly resolved in the source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model, physically consistent with dynamic ruptures on rough faults.

  10. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.
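
    The local-basis idea can be sketched with toy forward models: a cheap approximate model, an "expensive" detailed one, and a dictionary of paired runs from which a K-nearest-neighbour error basis is built and projected out of the residual. All models and dimensions below are invented stand-ins, not the eikonal/straight-ray pair of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))
B = rng.normal(size=(5, 2))

def detailed(m):
    """Expensive 'detailed' forward model (toy stand-in)."""
    return A @ m + 0.3 * np.sin(B @ m)

def approx(m):
    """Fast approximate forward model; the difference is the model error."""
    return A @ m

# Dictionary of paired runs; in the actual method it grows during the MCMC.
dict_m = rng.uniform(-1, 1, (200, 2))
dict_err = np.array([detailed(m) - approx(m) for m in dict_m])

def corrected_residual(m_test, d_obs, K=8):
    """Project the model-error component out of the residual using a local
    basis built from the K nearest dictionary entries in parameter space."""
    idx = np.argsort(np.linalg.norm(dict_m - m_test, axis=1))[:K]
    basis = dict_err[idx].T                        # (n_data, K) local error basis
    r = d_obs - approx(m_test)
    coef, *_ = np.linalg.lstsq(basis, r, rcond=None)
    return r - basis @ coef

m_true = np.array([0.3, -0.5])
d_obs = detailed(m_true)                 # noise-free synthetic data
r_naive = d_obs - approx(m_true)         # residual polluted by model error
r_corr = corrected_residual(m_true, d_obs)
```

    At the true parameters the naive residual is pure model error; the local basis absorbs most of it before the likelihood would be evaluated.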

  11. Cross Correlations for Two-Dimensional Geosynchronous Satellite Imagery Data,

    DTIC Science & Technology

    1980-04-01

    transform of f(x), g(x,u) is the forward transformation kernel, and u assumes values in the range 0, 1, ..., N-1. Similarly, the inverse transform is given...transform for values of u and v in the range 0, 1, 2, ..., N-1. To obtain the inverse transform we pre-multiply and post-multiply Eq. (5-7) by an inverse...any algorithm for computing the forward transform can be used directly to obtain the inverse transform simply by multiplying the result of the

  12. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
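
    For a discretised linear system the stochastic inverse has a compact closed form, and its relation to the Moore-Penrose generalized inverse can be checked numerically. The sketch below assumes identity prior covariance and nearly zero noise covariance purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(3, 10))          # underdetermined linear system G m = d
m_true = rng.normal(size=10)
d = G @ m_true                        # noise-free gross-earth-style data

C_m = np.eye(10)                      # prior model covariance (illustrative)
C_n = 1e-8 * np.eye(3)                # data noise covariance (nearly zero)

# Franklin-style stochastic inverse: m_hat = C_m G^T (G C_m G^T + C_n)^{-1} d
m_hat = C_m @ G.T @ np.linalg.solve(G @ C_m @ G.T + C_n, d)

# As C_n -> 0 with C_m = I, this reduces to the Moore-Penrose generalized inverse.
m_pinv = np.linalg.pinv(G) @ d
```

    The estimate fits the data exactly in this noise-free limit and coincides with the minimum-norm (Penrose-Moore) solution, illustrating the relation derived in the paper.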

  13. Modeling analysis of pulsed magnetization process of magnetic core based on inverse Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang

    2018-05-01

    The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.

  14. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such data sets requires the ability to efficiently solve and optimize the nonlinear time-domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature on Richards equation inversion explicitly calculates the sensitivity matrix using finite differences or automatic differentiation; however, for large-scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large-scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open-source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
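
    The paper's sensitivities come implicitly from the discretised Richards equation; as an illustration of the matrix-free idea only, the sketch below forms a Jacobian-vector product by directional finite differences on an invented toy forward model and checks it against the explicit Jacobian, which is affordable only because the toy is tiny.

```python
import numpy as np

def forward(p):
    """Toy nonlinear forward model standing in for a Richards-equation solve."""
    return np.array([p[0] ** 2 + p[1], np.sin(p[0]) * p[1], p[1] ** 3])

def jac_vec(p, v, h=1e-6):
    """Matrix-free J @ v via a central finite difference: the Jacobian is
    never formed or stored, only its action on a vector is computed."""
    return (forward(p + h * v) - forward(p - h * v)) / (2 * h)

p = np.array([0.7, 1.3])
v = np.array([0.2, -0.5])

# Explicit Jacobian of the toy model, for verification only.
J = np.array([
    [2 * p[0], 1.0],
    [np.cos(p[0]) * p[1], np.sin(p[0])],
    [0.0, 3 * p[1] ** 2],
])
```

    In the actual method the product is obtained by solving linearized systems (implicit sensitivities) rather than by differencing, but the interface an optimizer needs — "give me J @ v" — is the same.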

  15. Six-minute magnetic resonance imaging protocol for evaluation of acute ischemic stroke: pushing the boundaries.

    PubMed

    Nael, Kambiz; Khan, Rihan; Choudhary, Gagandeep; Meshksar, Arash; Villablanca, Pablo; Tay, Jennifer; Drake, Kendra; Coull, Bruce M; Kidwell, Chelsea S

    2014-07-01

    If magnetic resonance imaging (MRI) is to compete with computed tomography for evaluation of patients with acute ischemic stroke, there is a need for further improvements in acquisition speed. Inclusion criteria for this prospective, single institutional study were symptoms of acute ischemic stroke within 24 hours onset, National Institutes of Health Stroke Scale ≥3, and absence of MRI contraindications. A combination of echo-planar imaging (EPI) and a parallel acquisition technique were used on a 3T magnetic resonance (MR) scanner to accelerate the acquisition time. Image analysis was performed independently by 2 neuroradiologists. A total of 62 patients met inclusion criteria. A repeat MRI scan was performed in 22 patients resulting in a total of 84 MRIs available for analysis. Diagnostic image quality was achieved in 100% of diffusion-weighted imaging, 100% EPI-fluid attenuation inversion recovery imaging, 98% EPI-gradient recalled echo, 90% neck MR angiography and 96% of brain MR angiography, and 94% of dynamic susceptibility contrast perfusion scans with interobserver agreements (k) ranging from 0.64 to 0.84. Fifty-nine patients (95%) had acute infarction. There was good interobserver agreement for EPI-fluid attenuation inversion recovery imaging findings (k=0.78; 95% confidence interval, 0.66-0.87) and for detection of mismatch classification using dynamic susceptibility contrast-Tmax (k=0.92; 95% confidence interval, 0.87-0.94). Thirteen acute intracranial hemorrhages were detected on EPI-gradient recalled echo by both observers. A total of 68 and 72 segmental arterial stenoses were detected on contrast-enhanced MR angiography of the neck and brain with k=0.93, 95% confidence interval, 0.84 to 0.96 and 0.87, 95% confidence interval, 0.80 to 0.90, respectively. 
A 6-minute multimodal MR protocol with good diagnostic quality is feasible for the evaluation of patients with acute ischemic stroke and can result in significant reduction in scan time rivaling that of the multimodal computed tomographic protocol. © 2014 American Heart Association, Inc.

  16. Validation of a "Kane's Dynamics" Model for the Active Rack Isolation System

    NASA Technical Reports Server (NTRS)

    Beech, Geoffrey S.; Hampton, R. David

    2000-01-01

    Many microgravity space-science experiments require vibratory acceleration levels unachievable without active isolation. The Boeing Corporation's Active Rack Isolation System (ARIS) employs a novel combination of magnetic actuation and mechanical linkages to address these isolation requirements on the International Space Station (ISS). ARIS provides isolation at the rack (International Standard Payload Rack, or ISPR) level. Effective model-based vibration isolation requires (1) an isolation device, (2) an adequate dynamic (i.e., mathematical) model of that isolator, and (3) a suitable, corresponding controller. ARIS provides the ISS's response to the first requirement. In November 1999, the authors presented a response to the second ("A 'Kane's Dynamics' Model for the Active Rack Isolation System", Hampton and Beech) intended to facilitate an optimal-controls approach to the third. This paper documents the validation of that high-fidelity dynamic model of ARIS. As before, this model contains the full actuator dynamics; however, the umbilical models are not included in this presentation. The validation of this dynamics model was achieved by utilizing two Commercial Off-the-Shelf (COTS) software tools: Deneb's ENVISION and Online Dynamics' AUTOLEV. ENVISION is a robotics software package developed for the automotive industry that employs 3-dimensional (3-D) Computer-Aided Design (CAD) models to facilitate both forward and inverse kinematics analyses. AUTOLEV is a DOS-based interpreter designed in general to solve vector-based mathematical problems and specifically to solve dynamics problems using Kane's method.

  17. Biomechanical Comparison of 3 Ankle Braces With and Without Free Rotation in the Sagittal Plane

    PubMed Central

    Alfuth, Martin; Klein, Dieter; Koch, Raphael; Rosenbaum, Dieter

    2014-01-01

    Context: Various designs of braces including hinged and nonhinged models are used to provide external support of the ankle. Hinged ankle braces supposedly allow almost free dorsiflexion and plantar flexion of the foot in the sagittal plane. It is unclear, however, whether this additional degree of freedom affects the stabilizing effect of the brace in the other planes of motion. Objective: To investigate the dynamic and passive stabilizing effects of 3 ankle braces, 2 hinged models that provide free plantar flexion–dorsiflexion in the sagittal plane and 1 ankle brace without a hinge. Design: Crossover study. Setting: University Movement Analysis Laboratory. Patients or Other Participants: Seventeen healthy volunteers (5 women, 12 men; age = 25.4 ± 4.8 years; height = 180.3 ± 6.5 cm; body mass = 75.5 ± 10.4 kg). Intervention(s): We dynamically induced foot inversion on a tilting platform and passively induced foot movements in 6 directions via a custom-built apparatus in 3 brace conditions and a control condition (no brace). Main Outcome Measure(s): Maximum inversion was determined dynamically using an in-shoe electrogoniometer. Passively induced maximal joint angles were measured using a torque and angle sensor. We analyzed differences among the 4 ankle-brace conditions (3 braces, 1 control) for each of the dependent variables with Friedman and post hoc tests (P < .05). Results: Each ankle brace restricted dynamic foot-inversion movements on the tilting platform as compared with the control condition, whereas only the 2 hinged ankle braces differed from each other, with greater movement restriction caused by the Ankle X model. Passive foot inversion was reduced with all ankle braces. Passive plantar flexion was greater in the hinged models as compared with the nonhinged brace. Conclusions: All ankle braces showed stabilizing effects against dynamic and passive foot inversion. 
Differences between the hinged braces and the nonhinged brace did not appear to be clinically relevant. PMID:25098661

  18. How much can history constrain adaptive evolution? A real-time evolutionary approach of inversion polymorphisms in Drosophila subobscura.

    PubMed

    Fragata, I; Lopes-Cunha, M; Bárbaro, M; Kellen, B; Lima, M; Santos, M A; Faria, G S; Santos, M; Matos, M; Simões, P

    2014-12-01

    Chromosomal inversions are present in a wide range of animals and plants, having an important role in adaptation and speciation. Although empirical evidence of their adaptive value is abundant, the role of different processes underlying evolution of chromosomal polymorphisms is not fully understood. History and selection are likely to shape inversion polymorphism variation to an extent yet largely unknown. Here, we perform a real-time evolution study addressing the role of historical constraints and selection in the evolution of these polymorphisms. We founded laboratory populations of Drosophila subobscura derived from three locations along the European cline and followed the evolutionary dynamics of inversion polymorphisms throughout the first 40 generations. At the beginning, populations were highly differentiated and remained so throughout generations. We report evidence of positive selection for some inversions, variable between foundations. Signs of negative selection were more frequent, in particular for most cold-climate standard inversions across the three foundations. We found that previously observed convergence at the phenotypic level in these populations was not associated with convergence in inversion frequencies. In conclusion, our study shows that selection has shaped the evolutionary dynamics of inversion frequencies, but doing so within the constraints imposed by previous history. Both history and selection are therefore fundamental to predict the evolutionary potential of different populations to respond to global environmental changes. © 2014 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  19. Secondary Structure Predictions for Long RNA Sequences Based on Inversion Excursions and MapReduce.

    PubMed

    Yehdego, Daniel T; Zhang, Boyu; Kodimala, Vikram K R; Johnson, Kyle L; Taufer, Michela; Leung, Ming-Ying

    2013-05-01

    Secondary structures of ribonucleic acid (RNA) molecules play important roles in many biological processes including gene expression and regulation. Experimental observations and computing limitations suggest that we can approach the secondary structure prediction problem for long RNA sequences by segmenting them into shorter chunks, predicting the secondary structure of each chunk individually using existing prediction programs, and then assembling the results to give the structure of the original sequence. The selection of cutting points is a crucial component of the segmenting step. Noting that stem-loops and pseudoknots always contain an inversion, i.e., a stretch of nucleotides followed closely by its inverse complementary sequence, we developed two cutting methods for segmenting long RNA sequences based on inversion excursions: the centered and the optimized methods. Each step of searching for inversions, chunking, and prediction can be performed in parallel. In this paper we use a MapReduce framework, i.e., Hadoop, to extensively explore meaningful inversion stem lengths and gap sizes for the segmentation and identify correlations between chunking methods and prediction accuracy. We show that for a set of long RNA sequences in the RFAM database, whose secondary structures are known to contain pseudoknots, our approach predicts secondary structures more accurately than methods that do not segment the sequence, when the latter predictions are computationally possible. We also show that, as sequences exceed certain lengths, some programs cannot computationally predict pseudoknots while our chunking methods can. Overall, our predicted structures still retain the accuracy level of the original prediction programs when compared with known experimental secondary structures.
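
    The inversions used for cutting can be found by a direct search for a stem followed closely by its inverse complementary sequence. The sketch below is a brute-force illustration with made-up stem-length and gap bounds; it considers Watson-Crick pairs only and ignores everything else the actual pipeline handles.

```python
# Watson-Crick complement for RNA (wobble pairs ignored in this sketch).
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(seq):
    """Inverse complementary sequence: reverse, then complement each base."""
    return "".join(COMP[b] for b in reversed(seq))

def find_inversions(seq, stem=4, max_gap=6):
    """Return (i, j, stem) triples where seq[i:i+stem] is followed within
    max_gap nucleotides by its inverse complementary sequence at j."""
    hits = []
    for i in range(len(seq) - 2 * stem):
        target = revcomp(seq[i:i + stem])
        for j in range(i + stem, min(i + stem + max_gap, len(seq) - stem) + 1):
            if seq[j:j + stem] == target:
                hits.append((i, j, stem))
    return hits

# A hairpin: GGGC ... loop ... GCCC (GCCC is the inverse complement of GGGC).
hairpin = "GGGCAUAAGCCC"
```

    The centered and optimized methods of the paper then place the cutting points relative to such hits; the search itself is embarrassingly parallel, which is what maps naturally onto MapReduce.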

  20. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; and (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault embedded in a layered crustal velocity-density structure.

  1. Is chemical heating a major cause of the mesosphere inversion layer?

    NASA Technical Reports Server (NTRS)

    Meriwether, John W.; Mlynczak, Martin G.

    1995-01-01

    A region of thermal enhancement of the mesosphere has been detected on numerous occasions by in situ measurements, remote sensing from space, and lidar techniques. The source of these 'temperature inversion layers' has been attributed in the literature to the dissipation relating to dynamical forcing by gravity wave or tidal activity. However, evidence that gravity wave breaking can produce the inversion layer with amplitude as large as that observed in lidar measurements has been limited to results of numerical modeling. An alternative source for the production of the thermal inversion layer in the mesosphere is the direct deposition of heat by exothermic chemical reactions. Two-dimensional modeling combining a comprehensive model of the mesosphere photochemistry with the dynamical transport of long-lived species shows that the region from 80 to 95 km may be heated as much as 3 to 10 K/d during the night and half this rate during the day. Given the uncertainties in our understanding of the dynamics and chemistry for the mesopause region, separating the two sources by passive observations of the mesosphere thermal structure appears difficult. Therefore we have considered an active means for producing a mesopause thermal layer, namely the release of ozone into the upper mesosphere from a rocket payload. The induced effects would include artificial enhancements of the OH and Na airglow intensities as well as the mesopause thermal structure. The advantage of the rocket release of ozone is that detection of these effects by ground-based imaging, radar, and lidar systems and comparison of these effects with model predictions would help quantify the partition of the artificial inversion layer production into sources of dynamical and chemical forcing.

  2. Direct integration of the inverse Radon equation for X-ray computed tomography.

    PubMed

    Libin, E E; Chakhlov, S V; Trinca, D

    2016-11-22

    A new mathematical approach using the inverse Radon equation for the restoration of images in problems of linear two-dimensional X-ray tomography is formulated. This approach does not use the Fourier transformation, which makes it possible to create practical computing algorithms with a more reliable mathematical substantiation. Results of a software implementation show that, especially for low numbers of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.

  3. Application of Adaptive Autopilot Designs for an Unmanned Aerial Vehicle

    NASA Technical Reports Server (NTRS)

    Shin, Yoonghyun; Calise, Anthony J.; Motter, Mark A.

    2005-01-01

    This paper summarizes the application of two adaptive approaches to autopilot design and presents an evaluation and comparison of the two approaches in simulation for an unmanned aerial vehicle. One approach employs two-stage dynamic inversion and the other employs feedback dynamic inversion based on a command augmentation system. Both are augmented with neural network based adaptive elements. The approaches permit adaptation to both parametric uncertainty and unmodeled dynamics, and incorporate a method that permits adaptation during periods of control saturation. Simulation results for an FQM-117B radio controlled miniature aerial vehicle are presented to illustrate the performance of the neural network based adaptation.
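    In its simplest form, feedback dynamic inversion cancels the known plant dynamics and replaces them with desired error dynamics. A minimal scalar sketch follows; the plant functions f(x), g(x), the gain k, and the step size are illustrative assumptions of this example, not the paper's vehicle model or its neural-network augmentation:

```python
import numpy as np

def f(x):
    """Assumed plant drift term (illustrative)."""
    return -np.sin(x)

def g(x):
    """Assumed control effectiveness (illustrative, nonzero)."""
    return 1.0 + 0.5 * np.cos(x)

def dynamic_inversion_step(x, x_ref, k=5.0, dt=0.01):
    """One Euler step of x_dot = f(x) + g(x) u with the inversion law
    u = (v - f(x)) / g(x), where v is a simple proportional pseudo-control."""
    v = k * (x_ref - x)          # desired error dynamics
    u = (v - f(x)) / g(x)        # invert the plant dynamics
    return x + dt * (f(x) + g(x) * u)

x = 0.0
for _ in range(500):
    x = dynamic_inversion_step(x, x_ref=1.0)
print(round(x, 3))   # → 1.0
```

Because the inversion is exact here, the closed loop reduces to the chosen first-order error dynamics regardless of the nonlinearity in f and g; adaptive elements become necessary when f and g are only approximately known.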

  4. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II.Towards Massively Parallel Computations using Smooth Particle Mesh Ewald.

    PubMed

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2014-02-28

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in the energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, the first implementation of a polarizable model to make large-scale, massively parallel PBC point dipole simulations possible. We show that using a large number of cores offers a significant acceleration of the overall iterative process within the SPME context and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data across cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, opening new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
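    One of the two solvers named above, the preconditioned conjugate gradient method, can be sketched compactly. The diagonal (Jacobi) preconditioner and the diagonally dominant toy matrix below are assumptions of this example; a real polarization matrix couples induced dipoles through dipole-dipole interaction tensors:

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=200):
    """Diagonal-preconditioned conjugate gradient for a symmetric
    positive-definite system A x = b."""
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# toy SPD matrix standing in for the polarization equations
rng = np.random.default_rng(0)
n = 50
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)
b = rng.normal(size=n)

x = pcg(A, b)
print(np.linalg.norm(A @ x - b) < 1e-6)   # → True
```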

  5. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II.Towards Massively Parallel Computations using Smooth Particle Mesh Ewald

    PubMed Central

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2015-01-01

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in the energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, the first implementation of a polarizable model to make large-scale, massively parallel PBC point dipole simulations possible. We show that using a large number of cores offers a significant acceleration of the overall iterative process within the SPME context and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data across cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, opening new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations. PMID:26512230

  6. Modeling 3D Dynamic Rupture on Arbitrarily-Shaped faults by Boundary-Conforming Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhu, D.; Zhu, H.; Luo, Y.; Chen, X.

    2008-12-01

    We use a new finite difference method (FDM) and the slip-weakening law to model the rupture dynamics of a non-planar fault embedded in a 3-D elastic medium with a free surface. The new FDM, based on a boundary-conforming grid, sets up the mapping equations between the curvilinear and Cartesian coordinates and transforms the irregular physical space into a regular computational space; it also employs a higher-order non-staggered DRP/opt MacCormack scheme of low dispersion and low dissipation, so that the high accuracy and stability of our rupture modeling are guaranteed. Compared with previous methods, not only can we compute the spontaneous rupture of an arbitrarily shaped fault, but we can also model the influence of surface topography on the rupture process of an earthquake. To verify the feasibility of this method, we compared our results with previously published results and found that they matched perfectly. Thanks to the boundary-conforming FDM, problems such as dynamic rupture with arbitrary dip, strike and rake over an arbitrarily curved plane can be handled, and supershear or subshear rupture can be simulated with different parameters such as the initial stresses and the critical slip displacement Dc. Besides, our rupture modeling is economical to implement owing to its high efficiency and does not suffer from displacement leakage. With the help of rupture inversion data from field observations, this method is convenient for modeling rupture processes and seismograms of natural earthquakes.

  7. Investigating Coulomb's Law.

    ERIC Educational Resources Information Center

    Noll, Ellis; Koehlinger, Mervin; Kowalski, Ludwik; Swackhamer, Gregg

    1998-01-01

    Describes the use of a computer-linked camera to demonstrate Coulomb's law. Suggests a way of reducing the difficulties in presenting Coulomb's law by teaching the inverse square law of gravity and the inverse square law of electricity in the same unit. (AIM)
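    The parallel the authors exploit, that gravity and electrostatics share the same inverse-square form, can be shown numerically. The constants are standard values; the charges, masses and separations are arbitrary illustrative inputs:

```python
K_E = 8.9875517923e9   # Coulomb constant, N m^2 / C^2
G   = 6.67430e-11      # gravitational constant, N m^2 / kg^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return K_E * q1 * q2 / r**2

def gravity_force(m1, m2, r):
    """Magnitude of the gravitational force between two point masses."""
    return G * m1 * m2 / r**2

# both laws scale as 1/r^2: doubling the separation quarters the force
f_ratio = coulomb_force(1e-6, 1e-6, 0.1) / coulomb_force(1e-6, 1e-6, 0.2)
g_ratio = gravity_force(5.0, 10.0, 1.0) / gravity_force(5.0, 10.0, 2.0)
print(round(f_ratio, 6), round(g_ratio, 6))   # → 4.0 4.0
```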

  8. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example on the smoothed Marmousi model.
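    A plain single-level particle swarm optimizer illustrates the global-search ingredient of MLPSO. The Rastrigin test function below stands in for the non-convex seismic misfit, and the swarm parameters are common textbook defaults, not the paper's settings:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best particle swarm optimization over box bounds."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + attraction to personal and global bests
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Rastrigin: many local minima, global minimum 0 at the origin
rastrigin = lambda p: 10 * len(p) + sum(
    pi**2 - 10 * np.cos(2 * np.pi * pi) for pi in p)

best, val = pso(rastrigin, bounds=[(-5.12, 5.12)] * 2)
print(val < 2.0)
```

The swarm typically escapes the many shallow local minima that trap gradient descent; MLPSO layers cheap low-fidelity objective evaluations (here, FGA solvers) under this same search loop.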

  9. Aerosol Robotic Network (AERONET) Version 3 Aerosol Optical Depth and Inversion Products

    NASA Astrophysics Data System (ADS)

    Giles, D. M.; Holben, B. N.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Sorokin, M. G.; Slutsker, I.

    2017-12-01

    The Aerosol Robotic Network (AERONET) surface-based aerosol optical depth (AOD) database has been a principal component of many Earth science remote sensing applications and modelling for more than two decades. During this time, the AERONET AOD database had utilized a semiautomatic quality assurance approach (Smirnov et al., 2000). Data quality automation developed for AERONET Version 3 (V3) was achieved by augmenting and improving upon the combination of Version 2 (V2) automatic and manual procedures to provide a more refined near real time (NRT) and historical worldwide database of AOD. The combined effect of these new changes provides a historical V3 AOD Level 2.0 data set comparable to V2 Level 2.0 AOD. The recently released V3 Level 2.0 AOD product uses Level 1.5 data with automated cloud screening and quality controls and applies pre-field and post-field calibrations and wavelength-dependent temperature characterizations. For V3, the AERONET aerosol retrieval code inverts AOD and almucantar sky radiances using a full vector radiative transfer called Successive ORDers of scattering (SORD; Korkin et al., 2017). The full vector code allows for potentially improving the real part of the complex index of refraction and the sphericity parameter and computing the radiation field in the UV (e.g., 380nm) and degree of linear depolarization. Effective lidar ratio and depolarization ratio products are also available with the V3 inversion release. Inputs to the inversion code were updated to the accommodate H2O, O3 and NO2 absorption to be consistent with the computation of V3 AOD. All of the inversion products are associated with estimated uncertainties that include the random error plus biases due to the uncertainty in measured AOD, absolute sky radiance calibration, and retrieved MODIS BRDF for snow-free and snow covered surfaces. The V3 inversion products use the same data quality assurance criteria as V2 inversions (Holben et al. 2006). 
The entire AERONET V3 almucantar inversion database was computed using the NASA High End Computing resources at NASA Ames Research Center and NASA Goddard Space Flight Center. In addition to a description of data products, this presentation will provide a comparison of the V3 AOD and inversion climatology comparison of the V3 Level 2.0 and V2 Level 2.0 for sites with varying aerosol types.

  10. Simultaneous prediction of muscle and contact forces in the knee during gait.

    PubMed

    Lin, Yi-Chung; Walter, Jonathan P; Banks, Scott A; Pandy, Marcus G; Fregly, Benjamin J

    2010-03-22

    Musculoskeletal models are currently the primary means for estimating in vivo muscle and contact forces in the knee during gait. These models typically couple a dynamic skeletal model with individual muscle models but rarely include articular contact models due to their high computational cost. This study evaluates a novel method for predicting muscle and contact forces simultaneously in the knee during gait. The method utilizes a 12 degree-of-freedom knee model (femur, tibia, and patella) combining muscle, articular contact, and dynamic skeletal models. Eight static optimization problems were formulated using two cost functions (one based on muscle activations and one based on contact forces) and four constraint sets (each composed of different combinations of inverse dynamic loads). The estimated muscle and contact forces were evaluated using in vivo tibial contact force data collected from a patient with a force-measuring knee implant. When the eight optimization problems were solved with added constraints to match the in vivo contact force measurements, root-mean-square errors in predicted contact forces were less than 10 N. Furthermore, muscle and patellar contact forces predicted by the two cost functions became more similar as more inverse dynamic loads were used as constraints. When the contact force constraints were removed, estimated medial contact forces were similar and lateral contact forces lower in magnitude compared to measured contact forces, with estimated muscle forces being sensitive and estimated patellar contact forces relatively insensitive to the choice of cost function and constraint set. These results suggest that optimization problem formulation coupled with knee model complexity can significantly affect predicted muscle and contact forces in the knee during gait. Further research using a complete lower limb model is needed to assess the importance of this finding to the muscle and contact force estimation process. 
Copyright (c) 2009 Elsevier Ltd. All rights reserved.
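    At the core of static optimization formulations like those above is finding muscle forces that reproduce the inverse-dynamics joint loads. A minimal sketch follows, with a hypothetical moment-arm matrix and torque vector, minimizing the force norm and omitting the non-negativity constraint and activation-based cost for brevity:

```python
import numpy as np

# hypothetical moment-arm matrix R (joints x muscles), in metres
R = np.array([[0.04, 0.03, -0.02, 0.05],
              [0.00, 0.02,  0.04, -0.01]])
# hypothetical joint torques from inverse dynamics, in N m
tau = np.array([40.0, 10.0])

# minimum-norm muscle forces satisfying R f = tau
# (the muscle redundancy problem: 4 unknowns, 2 equations)
f = np.linalg.pinv(R) @ tau
print(np.allclose(R @ f, tau))   # → True
```

Real formulations, like those in the abstract, add f >= 0 bounds and physiological cost functions, turning this into a constrained optimization rather than a pseudoinverse solve.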

  11. Constraining inverse-curvature gravity with supernovae.

    PubMed

    Mena, Olga; Santiago, José; Weller, Jochen

    2006-02-03

    We show that models of generalized modified gravity, with inverse powers of the curvature, can explain the current accelerated expansion of the Universe without resorting to dark energy and without conflicting with solar system experiments. We have solved the Friedmann equations for the full dynamical range of the evolution of the Universe and performed a detailed analysis of supernovae data in the context of such models that results in an excellent fit. If we further include constraints on the current expansion of the Universe and on its age, we obtain that the matter content of the Universe is 0.07

  12. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome.

    PubMed

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the "connectome". Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at the scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data has been missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.) and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic preprocessing of M/EEG signals, ii) solution of the inverse problem to localize/reconstruct the cortical sources, iii) computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) computation of network measures based on graph theory. EEGNET is the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/.
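    Step iv), computing graph-theoretic measures from a connectivity matrix, reduces to simple matrix operations once the matrix is thresholded. The threshold value and the 4-node toy matrix below are illustrative assumptions, not EEGNET's actual interface:

```python
import numpy as np

def graph_measures(C, threshold=0.5):
    """Binarize a functional connectivity matrix and compute
    node degree and overall connection density."""
    A = (C > threshold).astype(int)   # adjacency matrix
    np.fill_diagonal(A, 0)            # no self-connections
    degree = A.sum(axis=1)
    n = len(A)
    density = A.sum() / (n * (n - 1))
    return degree, density

# toy symmetric connectivity matrix for 4 "electrodes"
C = np.array([[1.0, 0.8, 0.2, 0.6],
              [0.8, 1.0, 0.7, 0.1],
              [0.2, 0.7, 1.0, 0.3],
              [0.6, 0.1, 0.3, 1.0]])

deg, dens = graph_measures(C)
print(deg.tolist(), round(dens, 2))   # → [2, 2, 1, 1] 0.5
```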

  13. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome

    PubMed Central

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the “connectome”. Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at the scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data has been missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.) and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic preprocessing of M/EEG signals, ii) solution of the inverse problem to localize/reconstruct the cortical sources, iii) computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) computation of network measures based on graph theory. EEGNET is the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/. PMID:26379232

  14. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
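    The central idea, pushing a probability measure on an output quantity of interest back onto the Manning's n parameter, can be caricatured with Monte Carlo sampling. The one-parameter forward map below is a made-up stand-in for ADCIRC, chosen only to make the push-forward and inverse-set computation concrete:

```python
import numpy as np

def forward(n):
    """Toy forward map: Manning's n -> peak water elevation
    (illustrative stand-in for a hydrodynamic model, NOT ADCIRC)."""
    return 2.0 / np.sqrt(n)

rng = np.random.default_rng(4)
n_samples = rng.uniform(0.01, 0.09, 100000)   # uniform "prior" on Manning's n
q = forward(n_samples)                         # push-forward of the samples

# observed interval for the quantity of interest; the samples whose outputs
# land in it approximate the inverse set and its probability
q_lo, q_hi = 8.0, 10.0
inverse_set = (q >= q_lo) & (q <= q_hi)
prob = inverse_set.mean()
print(round(prob, 2))
```

The measure-theoretic framework in the abstract makes this map-and-count intuition rigorous for spatially heterogeneous parameter fields; here the exact answer is (0.0625 - 0.04) / 0.08 ≈ 0.28.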

  15. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  16. Hidden scale invariance of metals

    NASA Astrophysics Data System (ADS)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.; Pedersen, Ulf R.

    2015-11-01

    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general "hidden" scale invariance of metals making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT-computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period-three metals follow invariance curves (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale-invariant inverse power-law (IPL) pair interactions. However, crystal packings of several transition metals (V, Cr, Mn, Fe, Nb, Mo, Ta, W, and Hg), most post-transition metals (Ga, In, Sn, and Tl), and the metalloids Si and Ge cannot be explained by the IPL assumption. The virial-energy correlation coefficients of iron and phosphorus are shown to increase at elevated pressures. Finally, we discuss how scale invariance explains the Grüneisen equation of state and a number of well-known empirical melting and freezing rules.
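    The two fluctuation quantities behind "hidden scale invariance", the virial-potential correlation coefficient and the density-scaling exponent, are simple moments of the isochoric fluctuations (standard definitions in this literature). A sketch with synthetic samples in place of DFT trajectory data, where the near-perfect proportionality W ≈ gamma·U is an assumption built into the fake data:

```python
import numpy as np

def correlation_and_gamma(W, U):
    """R = <dW dU> / sqrt(<dW^2><dU^2>) and gamma = <dW dU> / <dU^2>,
    computed from samples of virial W and potential energy U."""
    dW, dU = W - W.mean(), U - U.mean()
    cov = (dW * dU).mean()
    R = cov / np.sqrt((dW**2).mean() * (dU**2).mean())
    gamma = cov / (dU**2).mean()
    return R, gamma

# synthetic isochoric fluctuations with strong W ~ 5*U proportionality
rng = np.random.default_rng(1)
U = rng.normal(0.0, 1.0, 10000)
W = 5.0 * U + rng.normal(0.0, 0.1, 10000)

R, gamma = correlation_and_gamma(W, U)
print(R > 0.99, round(gamma, 1))   # → True 5.0
```

Systems with R close to 1 are the "strongly correlating" (hidden-scale-invariant) ones; gamma is the density scaling exponent the abstract compares against experiment.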

  17. Study of crash energy absorption characteristics of inversion tube on passenger vehicle

    NASA Astrophysics Data System (ADS)

    Liu, Jiandong; Liu, Tao; Yao, Shengjie; Zhao, Rutao

    2017-09-01

    This article studied the energy absorption characteristics of the inversion tube and obtained the key design dimensions of the inversion tube under theoretical conditions by deriving formulas for the quasi-static and dynamic cases, based on the working principle of the inversion tube: free inversion. The article further adopted HyperMesh and LS-DYNA to perform simulation and compared the simulation results with the theoretical values. The design was applied in a full-vehicle model to perform a 50 km/h full-width frontal crash simulation. The findings showed that the deformation mode of the inversion tube in the full-vehicle crash was consistent with the design mode, and the inversion tube absorbed 33.0% of the total energy, thereby conforming to vehicle safety design requirements.

  18. Inverse Stochastic Resonance in Cerebellar Purkinje Cells

    PubMed Central

    Häusser, Michael; Gutkin, Boris S.; Roth, Arnd

    2016-01-01

    Purkinje neurons play an important role in cerebellar computation since their axons are the only projection from the cerebellar cortex to deeper cerebellar structures. They have complex internal dynamics, which allow them to fire spontaneously, display bistability, and also to be involved in network phenomena such as high frequency oscillations and travelling waves. Purkinje cells exhibit type II excitability, which can be revealed by a discontinuity in their f-I curves. We show that this excitability mechanism allows Purkinje cells to be efficiently inhibited by noise of a particular variance, a phenomenon known as inverse stochastic resonance (ISR). While ISR has been described in theoretical models of single neurons, here we provide the first experimental evidence for this effect. We find that an adaptive exponential integrate-and-fire model fitted to the basic Purkinje cell characteristics using a modified dynamic IV method displays ISR and bistability between the resting state and a repetitive activity limit cycle. ISR allows the Purkinje cell to operate in different functional regimes: the all-or-none toggle or the linear filter mode, depending on the variance of the synaptic input. We propose that synaptic noise allows Purkinje cells to quickly switch between these functional regimes. Using mutual information analysis, we demonstrate that ISR can lead to a locally optimal information transfer between the input and output spike train of the Purkinje cell. These results provide the first experimental evidence for ISR and suggest a functional role for ISR in cerebellar information processing. PMID:27541958

  19. Almost but not quite 2D, Non-linear Bayesian Inversion of CSEM Data

    NASA Astrophysics Data System (ADS)

    Ray, A.; Key, K.; Bodin, T.

    2013-12-01

    The geophysical inverse problem can be elegantly stated in a Bayesian framework where a probability distribution can be viewed as a statement of information regarding a random variable. After all, the goal of geophysical inversion is to provide information on the random variables of interest - physical properties of the earth's subsurface. However, though it may be simple to postulate, a practical difficulty of fully non-linear Bayesian inversion is the computer time required to adequately sample the model space and extract the information we seek. As a consequence, in geophysical problems where evaluation of a full 2D/3D forward model is computationally expensive, such as marine controlled source electromagnetic (CSEM) mapping of the resistivity of seafloor oil and gas reservoirs, Bayesian studies have largely been conducted with 1D forward models. While the 1D approximation is indeed appropriate for exploration targets with planar geometry and geological stratification, it only provides a limited, site-specific idea of uncertainty in resistivity with depth. In this work, we extend our fully non-linear 1D Bayesian inversion to a 2D model framework, without requiring the usual regularization of model resistivities in the horizontal or vertical directions used to stabilize quasi-2D inversions. In our approach, we use the reversible jump Markov-chain Monte-Carlo (RJ-MCMC) or trans-dimensional method and parameterize the subsurface in a 2D plane with Voronoi cells. The method is trans-dimensional in that the number of cells required to parameterize the subsurface is variable, and the cells dynamically move around and multiply or combine as demanded by the data being inverted. This approach allows us to expand our uncertainty analysis of resistivity at depth to more than a single site location, allowing for interactions between model resistivities at different horizontal locations along a traverse over an exploration target. 
While the model is parameterized in 2D, we efficiently evaluate the forward response using 1D profiles extracted from the model at the common-midpoints of the EM source-receiver pairs. Since the 1D approximation is locally valid at different midpoint locations, the computation time is far lower than is required by a full 2D or 3D simulation. We have applied this method to both synthetic and real CSEM survey data from the Scarborough gas field on the Northwest shelf of Australia, resulting in a spatially variable quantification of resistivity and its uncertainty in 2D. This Bayesian approach results in a large database of 2D models that comprise a posterior probability distribution, which we can subset to test various hypotheses about the range of model structures compatible with the data. For example, we can subset the model distributions to examine the hypothesis that a resistive reservoir extends over a certain spatial extent. Depending on how this conditions other parts of the model space, light can be shed on the geological viability of the hypothesis. Since tackling spatially variable uncertainty and trade-offs in 2D and 3D is a challenging research problem, the insights gained from this work may prove valuable for subsequent full 2D and 3D Bayesian inversions.
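    The sampling engine behind such a Bayesian inversion can be illustrated with a fixed-dimension Metropolis sampler; the trans-dimensional RJ-MCMC described above additionally proposes birth/death moves that change the number of Voronoi cells. The Gaussian toy posterior over a single log-resistivity value, and the step size, are assumptions of this sketch:

```python
import numpy as np

def metropolis(log_post, x0, step, n, seed=5):
    """Fixed-dimension random-walk Metropolis sampler."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + step * rng.normal()          # symmetric proposal
        lpp = log_post(xp)
        if np.log(rng.random()) < lpp - lp:   # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return np.array(chain)

# toy posterior over a log-resistivity value: Gaussian, mean 2.0, std 0.3
log_post = lambda m: -0.5 * ((m - 2.0) / 0.3) ** 2

chain = metropolis(log_post, x0=0.0, step=0.5, n=20000)
# discard burn-in, then summarize the posterior
print(round(chain[5000:].mean(), 1), round(chain[5000:].std(), 1))   # → 2.0 0.3
```

The posterior "database of models" in the abstract is exactly such a chain, but over variable-dimension 2D resistivity parameterizations rather than one scalar.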

  20. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large-scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonics studies in China. Aiming at the large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added different kinds of a priori information and constraints to deal with nonuniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not yet been widely applied to large-scale data. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need for a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. 
Next, we compare the forward gravity data with the real data and continue to perform 3D correlation imaging on the residual gravity data. After several iterations, we obtain a satisfactory result. The newly developed general-purpose computing technology from the Nvidia GPU (Graphics Processing Unit) has been put into practice and has received widespread attention in many areas. Based on the GPU programming model and two parallel levels, five CPU loops for the main computation of 3D correlation imaging are converted into three loops in GPU kernel functions, thus achieving GPU/CPU collaborative computing. The two inner loops are defined as the dimensions of blocks and the three outer loops are defined as the dimensions of threads, thus realizing the double-loop block calculation. Theoretical and real gravity data tests show that the results are reliable and the computing time is greatly reduced. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).

  1. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area-source pollutant strength is a relevant issue for the atmospheric environment, and it constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area-source domain is considered, where the strength of the area-source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse-problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. 
Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
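As a sketch of the network half of this comparison, a single-layer linear network can be trained with the delta rule to map receptor concentrations back to area-source strengths. The transition matrix, dimensions, noise and learning rate below are assumed toy values, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source-receptor setup: concentrations c = M @ s + noise,
# where M is the transition matrix and s the (unknown) area-source strengths.
n_src, n_obs = 4, 6
M = rng.uniform(0.1, 1.0, size=(n_obs, n_src))      # assumed transition matrix
S_train = rng.uniform(0.0, 2.0, size=(200, n_src))  # training source fields
C_train = S_train @ M.T + 0.01 * rng.normal(size=(200, n_obs))

# Single-layer linear network c -> s, trained sample-by-sample with the delta rule.
W = np.zeros((n_src, n_obs))
eta = 0.005                                         # assumed learning rate
for epoch in range(300):
    for c, s in zip(C_train, S_train):
        err = s - W @ c                             # output error for this sample
        W += eta * np.outer(err, c)                 # delta-rule weight update

# Invert a noise-free observation of an unseen source field.
s_true = np.array([1.0, 0.5, 1.5, 0.2])
s_hat = W @ (M @ s_true)
```

After training, the weight matrix approximates a regularized left inverse of the transition matrix, which is the sense in which the perceptron "solves" the inverse problem.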

  2. Unbiased, scalable sampling of protein loop conformations from probabilistic priors.

    PubMed

    Zhang, Yajia; Hauser, Kris

    2013-01-01

    Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.

  3. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

    Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175

  4. Improving urban wind flow predictions through data assimilation

    NASA Astrophysics Data System (ADS)

    Sousa, Jorge; Gorle, Catherine

    2017-11-01

    Computational fluid dynamics is fundamentally important to several aspects of the design of sustainable and resilient urban environments. The prediction of the flow pattern, for example, can help determine pedestrian wind comfort, air quality, optimal building ventilation strategies, and wind loading on buildings. However, the significant variability and uncertainty in the boundary conditions pose a challenge when interpreting results as a basis for design decisions. To improve our understanding of the uncertainties in the models and develop better predictive tools, we started a pilot field measurement campaign on Stanford University's campus combined with a detailed numerical prediction of the wind flow. The experimental data are being used to investigate the potential use of data assimilation and inverse techniques to better characterize the uncertainty in the results and improve the confidence in current wind flow predictions. We consider the incoming wind direction and magnitude as unknown parameters and perform a set of Reynolds-averaged Navier-Stokes simulations to build a polynomial chaos expansion response surface at each sensor location. We subsequently use an inverse ensemble Kalman filter to retrieve an estimate of the probability density function of the inflow parameters. Once these distributions are obtained, the forward analysis is repeated to obtain predictions for the flow field in the entire urban canopy, and the results are compared with the experimental data. We would like to acknowledge high-performance computing support from Yellowstone (ark:/85065/d7wd3xhc) provided by NCAR.
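The ensemble Kalman update for the inflow parameters can be sketched with a toy surrogate standing in for the polynomial-chaos response surface. The sensor gains, orientation offsets, noise level and prior below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy surrogate standing in for the response surface: sensor readings as a
# function of inflow magnitude U and direction theta (gains/offsets assumed).
def surrogate(U, theta):
    gains = np.array([0.8, 0.5, 1.1])
    phases = np.array([0.0, 0.7, 1.3])
    return gains * U * np.cos(theta - phases)

U_true, th_true = 6.0, 0.4
obs_err = 0.05
y = surrogate(U_true, th_true) + obs_err * rng.normal(size=3)   # "measured" data

# Prior ensemble of inflow parameters (speed, direction).
N = 500
ens = np.column_stack([rng.normal(5.0, 2.0, N), rng.normal(0.0, 0.5, N)])

# Ensemble Kalman update with perturbed observations:
# x_a = x_f + C_xy (C_yy + R)^-1 (y_pert - h(x_f))
Y = np.array([surrogate(U, th) for U, th in ens])
Y_pert = y + obs_err * rng.normal(size=Y.shape)
Xa, Ya = ens - ens.mean(0), Y - Y.mean(0)
C_xy = Xa.T @ Ya / (N - 1)
C_yy = Ya.T @ Ya / (N - 1) + obs_err**2 * np.eye(3)
K = C_xy @ np.linalg.inv(C_yy)
ens = ens + (Y_pert - Y) @ K.T

U_est, th_est = ens.mean(0)
```

The updated ensemble plays the role of the posterior distribution of the inflow parameters, which would then drive repeated forward runs over the urban canopy.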

  5. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the need for such simplifying assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficients of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. The required level of stochastic variation is determined by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain fast and accurate estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined by direct scanning of the parameter space as the optimization approach, and both approaches are shown to yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
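The baseline the authors compare against, direct scanning of the parameter space, can be sketched with a toy saturation model in place of the biphasic-solute finite-bath model. The rate constant, units and noise level are assumptions:

```python
import numpy as np

# Assumed toy model for bath-to-tissue diffusion: C(t) = C_eq * (1 - exp(-k*D*t)),
# a stand-in for the multi-zone biphasic-solute finite-bath model.
k, C_eq = 0.002, 1.0
t = np.linspace(0, 3600, 200)              # seconds

def forward(D):
    return C_eq * (1.0 - np.exp(-k * D * t))

D_true = 0.8                               # "measured" tissue, arbitrary units
rng = np.random.default_rng(2)
c_meas = forward(D_true) + 0.01 * rng.normal(size=t.size)

# Direct scanning of the parameter space: pick the D minimizing the misfit.
D_grid = np.linspace(0.1, 2.0, 400)
misfit = [np.sum((forward(D) - c_meas) ** 2) for D in D_grid]
D_est = D_grid[int(np.argmin(misfit))]
```

The scan requires one forward evaluation per grid point per specimen; the appeal of the trained inverse ANN is that, once trained, it maps a concentration-time curve to a diffusion coefficient in a single pass.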

  6. Coupled land surface-subsurface hydrogeophysical inverse modeling to estimate soil organic carbon content and explore associated hydrological and thermal dynamics in the Arctic tundra

    NASA Astrophysics Data System (ADS)

    Phuong Tran, Anh; Dafflon, Baptiste; Hubbard, Susan S.

    2017-09-01

    Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface-subsurface hydrological-thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon-climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological-thermal processes associated with annual freeze-thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets - including soil liquid water content, temperature and electrical resistivity tomography (ERT) data - to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface-subsurface hydrological dynamics from the bedrock to the top of canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice-liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of desired model parameters. For hydrological-thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. 
We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and the benefit of joint inversion on the estimation of OC and other parameters. We also quantify the propagation of uncertainty from the estimated parameters to the prediction of hydrological-thermal responses. We find that, compared to inversion of a single dataset (temperature, liquid water content or apparent resistivity), joint inversion of these datasets significantly reduces parameter uncertainty. We find that the joint inversion approach is able to estimate OC and sand content within the shallow active layer (top 0.3 m of soil) with high reliability. Due to the small variations of temperature and moisture within the shallow permafrost (here at about 0.6 m depth), the approach is unable to estimate OC with confidence. However, if the soil porosity is functionally related to the OC and mineral content, which is often observed in organic-rich Arctic soil, the uncertainty of the OC estimate at this depth decreases markedly. Our study documents the value of the new surface-subsurface, deterministic-stochastic inversion approach, as well as the benefit of including multiple types of data to estimate OC and associated hydrological-thermal dynamics.

  7. Development of a coupled FLEXPART-TM5 CO2 inverse modeling system

    NASA Astrophysics Data System (ADS)

    Monteil, Guillaume; Scholze, Marko

    2017-04-01

    Inverse modeling techniques are used to derive information on surface CO2 fluxes from measurements of atmospheric CO2 concentrations. The principle is to use an atmospheric transport model to compute the CO2 concentrations corresponding to a prior estimate of the surface CO2 fluxes. From the mismatches between observed and modeled concentrations, a correction of the flux estimate is computed that represents the best statistical compromise between the prior knowledge and the new information brought in by the observations. Such "top-down" CO2 flux estimates are useful for a number of applications, such as the verification of CO2 emission inventories reported by countries in the framework of international greenhouse gas emission reduction treaties (Paris agreement), or for the validation and improvement of the bottom-up models used in future climate predictions. Inverse modeling CO2 flux estimates are limited in resolution (spatial and temporal) by the lack of observational constraints and by the very heavy computational cost of high-resolution inversions. The observational limitation is, however, being lifted with the expansion of regional surface networks such as ICOS in Europe, and with the launch of new satellite instruments to measure tropospheric CO2 concentrations. To make efficient use of these new observations, it is necessary to step up the resolution of atmospheric inversions. We have developed an inverse modeling system based on a coupling between the TM5 and FLEXPART transport models. The coupling follows the approach described in Rödenbeck et al. (2009): a first global, coarse-resolution inversion is performed using TM5-4DVAR and is used to provide background constraints to a second, regional, fine-resolution inversion using FLEXPART as the transport model. 
The inversion algorithm is adapted from the 4DVAR algorithm used by TM5, but has been developed to be model-agnostic: it would be straightforward to replace TM5 and/or FLEXPART by other transport models, thus making it well suited to study transport model uncertainties. We will present preliminary European CO2 inversions using ICOS observations, and comparisons with TM5-4DVAR and TM3-STILT inversions. Reference: Rödenbeck, C., Gerbig, C., Trusilova, K., & Heimann, M. (2009). A two-step scheme for high-resolution regional atmospheric trace gas inversions based on independent models. Atmospheric Chemistry and Physics Discussions, 9(1), 1727-1756. http://doi.org/10.5194/acpd-9-1727-2009

  8. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ 1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with up to thousands of millions of variables show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
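The computational payoff of the low-rank-plus-diagonal assumption can be illustrated with the Woodbury identity: inverting Omega = D + V V^T costs O(p r^2) rather than the O(p^3) of a dense inverse. A minimal sketch follows (the dimensions are arbitrary, and the paper's COP components may additionally be sparse):

```python
import numpy as np

rng = np.random.default_rng(3)

# Precision (inverse covariance) assumed low-rank-plus-diagonal: Omega = D + V V^T.
p, r = 200, 3
d = rng.uniform(1.0, 2.0, p)                # diagonal entries of D
V = rng.normal(size=(p, r))                 # rank-r part (sum of r rank-one components)
Omega = np.diag(d) + V @ V.T

# Woodbury identity: Omega^-1 = D^-1 - D^-1 V (I + V^T D^-1 V)^-1 V^T D^-1.
# Only an r x r system is solved, so the cost is O(p r^2) instead of O(p^3).
Dinv_V = V / d[:, None]
core = np.linalg.inv(np.eye(r) + V.T @ Dinv_V)
Sigma = np.diag(1.0 / d) - Dinv_V @ core @ Dinv_V.T
```

This is the structural reason the assumed form "reduces the computational complexity when computing its inverse", as the abstract puts it.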

  9. Informativeness of Wind Data in Linear Madden-Julian Oscillation Prediction

    DTIC Science & Technology

    2016-08-15

    Linear inverse models (LIMs) are used to explore predictability and information content of the Madden–Julian Oscillation (MJO). Hindcast skill for ... mostly at the largest scales, adds 1–2 days of skill. Keywords: linear inverse modeling; Madden–Julian Oscillation; sub-seasonal prediction ... that may reflect on the MJO's incompletely understood dynamics. Cavanaugh et al. (2014, hereafter C14) explored the skill of linear inverse ...

  10. Inference of chromosomal inversion dynamics from Pool-Seq data in natural and laboratory populations of Drosophila melanogaster.

    PubMed

    Kapun, Martin; van Schalkwyk, Hester; McAllister, Bryant; Flatt, Thomas; Schlötterer, Christian

    2014-04-01

    Sequencing of pools of individuals (Pool-Seq) represents a reliable and cost-effective approach for estimating genome-wide SNP and transposable element insertion frequencies. However, Pool-Seq does not provide direct information on haplotypes so that, for example, obtaining inversion frequencies has not been possible until now. Here, we have developed a new set of diagnostic marker SNPs for seven cosmopolitan inversions in Drosophila melanogaster that can be used to infer inversion frequencies from Pool-Seq data. We applied our novel marker set to Pool-Seq data from an experimental evolution study and from North American and Australian latitudinal clines. In the experimental evolution data, we find evidence that positive selection has driven the frequencies of In(3R)C and In(3R)Mo to increase over time. In the clinal data, we confirm the existence of frequency clines for In(2L)t, In(3L)P and In(3R)Payne in both North America and Australia and detect a previously unknown latitudinal cline for In(3R)Mo in North America. The inversion markers developed here provide a versatile and robust tool for characterizing inversion frequencies and their dynamics in Pool-Seq data from diverse D. melanogaster populations. © 2013 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.
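The marker-based idea reduces, in its simplest form, to averaging the Pool-Seq frequencies of inversion-specific alleles at the diagnostic SNPs. The counts below are made up for illustration, and the published pipeline may weight or filter markers differently:

```python
import numpy as np

# Hypothetical Pool-Seq read counts at diagnostic marker SNPs for one inversion:
# each row is (count of inversion-specific allele, total coverage) at one marker.
counts = np.array([
    [12, 40],
    [ 9, 35],
    [15, 50],
    [10, 38],
])

# Per-marker allele frequencies, then the inversion-frequency estimate as the
# coverage-weighted mean over markers (the simple pooling idea).
freqs = counts[:, 0] / counts[:, 1]
inv_freq = counts[:, 0].sum() / counts[:, 1].sum()
```

Because the marker alleles are assumed fixed on the inverted arrangement, their pooled frequency directly estimates the inversion frequency in the sampled population.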

  11. Inference of chromosomal inversion dynamics from Pool-Seq data in natural and laboratory populations of Drosophila melanogaster

    PubMed Central

    Kapun, Martin; van Schalkwyk, Hester; McAllister, Bryant; Flatt, Thomas; Schlötterer, Christian

    2014-01-01

    Sequencing of pools of individuals (Pool-Seq) represents a reliable and cost-effective approach for estimating genome-wide SNP and transposable element insertion frequencies. However, Pool-Seq does not provide direct information on haplotypes so that, for example, obtaining inversion frequencies has not been possible until now. Here, we have developed a new set of diagnostic marker SNPs for seven cosmopolitan inversions in Drosophila melanogaster that can be used to infer inversion frequencies from Pool-Seq data. We applied our novel marker set to Pool-Seq data from an experimental evolution study and from North American and Australian latitudinal clines. In the experimental evolution data, we find evidence that positive selection has driven the frequencies of In(3R)C and In(3R)Mo to increase over time. In the clinal data, we confirm the existence of frequency clines for In(2L)t, In(3L)P and In(3R)Payne in both North America and Australia and detect a previously unknown latitudinal cline for In(3R)Mo in North America. The inversion markers developed here provide a versatile and robust tool for characterizing inversion frequencies and their dynamics in Pool-Seq data from diverse D. melanogaster populations. PMID:24372777

  12. Dynamical arrest with zero complexity: The unusual behavior of the spherical Blume-Emery-Griffiths disordered model

    NASA Astrophysics Data System (ADS)

    Rainone, Corrado; Ferrari, Ulisse; Paoluzzi, Matteo; Leuzzi, Luca

    2015-12-01

    The short- and long-time dynamics of model systems undergoing a glass transition with apparent inversion of Kauzmann and dynamical arrest glass transition lines is investigated. These models belong to the class of the spherical mean-field approximation of a spin-1 model with p -body quenched disordered interaction, with p >2 , termed spherical Blume-Emery-Griffiths models. Depending on temperature and chemical potential the system is found in a paramagnetic or in a glassy phase and the transition between these phases can be of a different nature. In specific regions of the phase diagram coexistence of low-density and high-density paramagnets can occur, as well as the coexistence of spin-glass and paramagnetic phases. The exact static solution for the glassy phase is known to be obtained by the one-step replica symmetry breaking ansatz. Different scenarios arise for both the dynamic and the thermodynamic transitions. These include: (i) the usual random first-order transition (Kauzmann-like) for mean-field glasses preceded by a dynamic transition, (ii) a thermodynamic first-order transition with phase coexistence and latent heat, and (iii) a regime of apparent inversion of static transition line and dynamic transition lines, the latter defined as a nonzero complexity line. The latter inversion, though, turns out to be preceded by a dynamical arrest line at higher temperature. Crossover between different regimes is analyzed by solving mode-coupling-theory equations near the boundaries of paramagnetic solutions and the relationship with the underlying statics is discussed.

  13. The effect of resistance level and stability demands on recruitment patterns and internal loading of spine in dynamic flexion and extension using a simple trunk model.

    PubMed

    Zeinali-Davarani, Shahrokh; Shirazi-Adl, Aboulfazl; Dariush, Behzad; Hemami, Hooshang; Parnianpour, Mohamad

    2011-07-01

    The effects of external resistance on the recruitment of trunk muscles in sagittal movements, and the coactivation mechanism to maintain spinal stability, were investigated using a simple computational model of iso-resistive spine sagittal movements. Neural excitation of muscles was attained based on an inverse dynamics approach along with a stability-based optimisation. Trunk flexion and extension movements between 60° flexion and the upright posture against various resistance levels were simulated. Incorporation of the stability constraint in the optimisation algorithm required higher antagonistic activities for all resistance levels, mostly close to the upright position. Extension movements showed higher coactivation with higher resistance, whereas flexion movements demonstrated lower coactivation, indicating a greater stability demand in backward extension movements against higher resistance in the neighbourhood of the upright posture. Optimal extension profiles based on minimum jerk, work and power had distinct kinematic profiles, which led to recruitment patterns with different timing and amplitude of activation.
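The inverse dynamics step at the core of such simulations can be sketched for a single-joint trunk model: a prescribed flexion trajectory yields the net muscle moment, which the stability-based optimisation would then distribute over agonists and antagonists (not shown here). All parameter values are illustrative, not the paper's:

```python
import numpy as np

# Single-joint trunk sketch: theta is the flexion angle (0 = upright).
I, m, L = 7.5, 35.0, 0.25                   # inertia (kg m^2), mass (kg), COM height (m)
g = 9.81

# Prescribed extension movement: 60 deg flexion to upright over 1 s (cosine profile).
t = np.linspace(0.0, 1.0, 101)
theta = np.deg2rad(60.0) * 0.5 * (1 + np.cos(np.pi * t))
dt = t[1] - t[0]
theta_dd = np.gradient(np.gradient(theta, dt), dt)   # angular acceleration

# Inverse dynamics: net muscle moment (positive = extensor) balancing inertia
# and the gravitational flexion moment m*g*L*sin(theta).
M_net = I * theta_dd + m * g * L * np.sin(theta)
```

In the full model this net moment is the equality constraint of the optimisation, while the stability requirement forces extra, mutually cancelling agonist-antagonist moments on top of it.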

  14. Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing †

    PubMed Central

    Latella, Claudia; Kuppuswamy, Naveen; Romano, Francesco; Traversaro, Silvio; Nori, Francesco

    2016-01-01

    Human motion tracking is a powerful tool used in a large range of applications that require human movement analysis. Although it is a well-established technique, its main limitation is the lack of real-time estimation of kinetic information, such as forces and torques, during motion capture. In this paper, we present a novel approach to human soft wearable force tracking for the simultaneous estimation of whole-body forces along with the motion. The early stage of our framework encompasses traditional passive marker-based methods, inertial and contact force sensor modalities, and harnesses a probabilistic computational technique for estimating dynamic quantities, originally proposed in the domain of humanoid robot control. We present an experimental analysis of subjects performing a two degrees-of-freedom bowing task, and we estimate the motion and kinetic quantities. The results demonstrate the validity of the proposed method. We discuss the possible use of this technique in the design of a novel soft wearable force tracking device and its potential applications. PMID:27213394

  15. Simulation of the modulation transfer function dependent on the partial Fourier fraction in dynamic contrast enhancement magnetic resonance imaging.

    PubMed

    Takatsu, Yasuo; Ueyama, Tsuyoshi; Miyati, Tosiaki; Yamamura, Kenichirou

    2016-12-01

    The image characteristics in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) depend on the partial Fourier fraction and contrast medium concentration. These characteristics were assessed and the modulation transfer function (MTF) was calculated by computer simulation. A digital phantom was created from signal intensity data acquired at different contrast medium concentrations on a breast model. The frequency images [created by fast Fourier transform (FFT)] were divided into 512 parts and rearranged to form a new image. The inverse FFT of this image yielded the MTF. From the reference data, three linear models (low, medium, and high) and three exponential models (slow, medium, and rapid) of the signal intensity were created. Smaller partial Fourier fractions, and higher gradients in the linear models, corresponded to faster MTF decline. The MTF more gradually decreased in the exponential models than in the linear models. The MTF, which reflects the image characteristics in DCE-MRI, was more degraded as the partial Fourier fraction decreased.
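The effect of the partial Fourier fraction can be sketched in 1-D: with zero-filled reconstruction the k-space sampling mask acts as the transfer function, and its inverse FFT is the point-spread function (PSF), which broadens as the sampled fraction shrinks. This is a minimal sketch; the paper's 512-part rearrangement procedure and contrast-enhancement models are not reproduced:

```python
import numpy as np

def psf_width(fraction, n=512):
    """Width (samples above half maximum) of the point-spread function for
    zero-filled partial-Fourier sampling of a given k-space fraction (1-D)."""
    mask = np.zeros(n)
    n_keep = int(round(fraction * n))
    mask[:n_keep] = 1.0                     # keep one asymmetric block of k-space
    psf = np.abs(np.fft.ifft(mask))         # PSF = inverse FFT of the sampling mask
    psf /= psf.max()
    return int(np.count_nonzero(psf > 0.5))

w_full = psf_width(1.0)                     # full sampling: PSF is a single spike
w_partial = psf_width(0.55)                 # partial Fourier: PSF main lobe broadens
```

The broadening of the PSF with a smaller fraction is the 1-D analogue of the faster MTF decline the simulation reports for smaller partial Fourier fractions.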

  16. Sources of shaking and flooding during the Tohoku-Oki earthquake: a mixture of rupture styles

    USGS Publications Warehouse

    Wei, Shengji; Graves, Robert; Helmberger, Don; Avouac, Jean-Philippe; Jiang, Junle

    2012-01-01

    Modeling strong ground motions from great subduction zone earthquakes is one of the great challenges of computational seismology. To separate the rupture characteristics from complexities caused by 3D sub-surface geology requires an extraordinary data set such as provided by the recent Mw9.0 Tohoku-Oki earthquake. Here we combine deterministic inversion and dynamically guided forward simulation methods to model over one thousand high-rate GPS and strong motion observations from 0 to 0.25 Hz across the entire Honshu Island. Our results display distinct styles of rupture with a deeper generic interplate event (~Mw8.5) transitioning to a shallow tsunamigenic earthquake (~Mw9.0) at about 25 km depth in a process driven by a strong dynamic weakening mechanism, possibly thermal pressurization. This source model predicts many important features of the broad set of seismic, geodetic and seafloor observations providing a major advance in our understanding of such great natural hazards.

  17. Heat-transfer dynamics during cryogen spray cooling of substrate at different initial temperatures.

    PubMed

    Jia, Wangcun; Aguilar, Guillermo; Wang, Guo-Xiang; Nelson, J Stuart

    2004-12-07

    Cryogen spray cooling (CSC) is used to minimize the risk of epidermal damage during laser dermatologic therapy. However, the dominant mechanisms of heat transfer during the transient cooling process are incompletely understood. The objective of this study is to elucidate the physics of CSC by measuring the effect of initial substrate temperature (T0) on cooling dynamics. Cryogen was delivered by a straight-tube nozzle onto a skin phantom. A fast-response thermocouple was used to record the phantom temperature changes before, during and after the cryogen spray. Surface heat fluxes (q") and heat-transfer coefficients (h) were computed using an inverse heat conduction algorithm. The maximum surface heat flux (q"max) was observed to increase with T0. The surface temperature corresponding to q"max also increased with T0 but the latter has no significant effect on h. It is concluded that heat transfer between the cryogen spray and skin phantom remains in the nucleate boiling region even if T0 is 80 degrees C.

  18. Automated inverse computer modeling of borehole flow data in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Sawdey, J. R.; Reeve, A. S.

    2012-09-01

    A computer model has been developed to simulate borehole flow in heterogeneous aquifers where the vertical distribution of permeability may vary significantly. In crystalline fractured aquifers, flow into or out of a borehole occurs at discrete locations of fracture intersection. Under these circumstances, flow simulations are defined by independent variables of transmissivity and far-field heads for each flow contributing fracture intersecting the borehole. The computer program, ADUCK (A Downhole Underwater Computational Kit), was developed to automatically calibrate model simulations to collected flowmeter data providing an inverse solution to fracture transmissivity and far-field head. ADUCK has been tested in variable borehole flow scenarios, and converges to reasonable solutions in each scenario. The computer program has been created using open-source software to make the ADUCK model widely available to anyone who could benefit from its utility.

  19. Visualizing characteristics of ocean data collected during the Shuttle Imaging Radar-B experiment

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1991-01-01

    Topographic measurements of sea surface elevation collected by the Surface Contour Radar (SCR) during NASA's Shuttle Imaging Radar (SIR-B) experiment are plotted as three dimensional surface plots to observe wave height variance along the track of a P-3 aircraft. Ocean wave spectra were computed from rotating altimeter measurements acquired by the Radar Ocean Wave Spectrometer (ROWS). Fourier power spectra computed from SIR-B synthetic aperture radar (SAR) images of the ocean are compared to ROWS surface wave spectra. Fourier inversion of SAR spectra, after subtraction of spectral noise and modeling of wave height modulation, yields topography similar to direct measurements made by SCR. Visual perspectives on the SCR and SAR ocean data are compared. Threshold distinctions between surface elevation and texture modulations of SAR data are considered within the context of a dynamic statistical model of rough surface scattering. The result of these endeavors is insight as to the physical mechanism governing the imaging of ocean waves with SAR.

  20. Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].

    PubMed

    Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N

    2014-09-20

    Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
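The random-subset idea can be sketched with a synthetic fringe pattern in place of a scattering model; here only the amplitude is scanned, with the positions held fixed, and all model details are assumptions. A full implementation would evaluate the model only at the subset pixels to realize the speed-up:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "hologram": ring-like fringes plus noise (a stand-in for a scattering model).
n = 256
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))

def model(params):
    x0, y0, amp = params
    return amp * np.cos(20.0 * np.hypot(x - x0, y - y0))

true = np.array([0.1, -0.05, 1.3])
img = model(true) + 0.1 * rng.normal(size=(n, n))

# Random-subset fit: evaluate the misfit on a random 5% of the pixels only.
idx = rng.choice(n * n, size=n * n // 20, replace=False)

def misfit(params, subset):
    r = (model(params) - img).ravel()       # a real implementation would evaluate
    return np.sum(r[subset] ** 2)           # the model at the subset pixels only

# Crude 1-D scan over the amplitude (positions fixed) to keep the sketch short.
amps = np.linspace(0.5, 2.0, 151)
best = amps[int(np.argmin([misfit([0.1, -0.05, a], idx) for a in amps]))]
```

Because the misfit is a sum over many noisy pixels, a modest random subset preserves the location of its minimum, which is why subset fitting trades little accuracy for a large speed-up.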
