Sample records for pseudo-inverse Jacobian control

  1. Advanced control schemes and kinematic analysis for a kinematically redundant 7 DOF manipulator

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1990-01-01

    The kinematic analysis and control of a kinematically redundant manipulator is addressed. The manipulator is the slave arm of a telerobot system recently built at Goddard Space Flight Center (GSFC) to serve as a testbed for investigating research issues in telerobotics. A forward kinematic transformation is developed in its most simplified form, suitable for real-time control applications, and the manipulator Jacobian is derived using the vector cross product method. Using the developed forward kinematic transformation and quaternion representation of orientation matrices, we perform computer simulation to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of Jacobian pseudo-inverse for various sampling times. The equivalence between Cartesian velocities and quaternion is also verified using computer simulation. Three control schemes are proposed and discussed for controlling the motion of the slave arm end-effector.
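
    A minimal sketch (not from the NTRS record) of the kind of pseudo-inverse resolved-rate step evaluated above: joint rates are obtained from a desired Cartesian twist through the Moore-Penrose pseudo-inverse of the Jacobian and integrated over one sampling interval. The 6x7 Jacobian and the sampling time dt below are stand-in values.

    ```python
    import numpy as np

    def resolved_rate_step(q, jacobian_fn, x_dot_des, dt):
        """One resolved-rate update: joint rates from Cartesian rates via the
        Moore-Penrose pseudo-inverse of the manipulator Jacobian."""
        J = jacobian_fn(q)                       # 6 x 7 Jacobian at the current joint angles
        q_dot = np.linalg.pinv(J) @ x_dot_des    # minimum-norm joint rates
        return q + dt * q_dot                    # Euler integration over one sample

    # Stand-in 7-DOF arm: a fixed random Jacobian just to exercise the code.
    rng = np.random.default_rng(0)
    J_const = rng.standard_normal((6, 7))
    q = np.zeros(7)
    x_dot = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.01])   # desired Cartesian twist
    q = resolved_rate_step(q, lambda joints: J_const, x_dot, dt=0.01)
    ```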

  2. Kinematically redundant robot manipulators

    NASA Technical Reports Server (NTRS)

    Baillieul, J.; Hollerbach, J.; Brockett, R.; Martin, D.; Percy, R.; Thomas, R.

    1987-01-01

Research on control, design and programming of kinematically redundant robot manipulators (KRRM) is discussed. These are devices in which there are more joint space degrees of freedom than are required to achieve every position and orientation of the end-effector necessary for a given task in a given workspace. The technological developments described here deal with: kinematic programming techniques for automatically generating joint-space trajectories to execute prescribed tasks; control of redundant manipulators to optimize dynamic criteria (e.g., applications of forces and moments at the end-effector that optimally distribute the loading of actuators); and design of KRRMs to optimize functionality in congested work environments or to achieve other goals unattainable with non-redundant manipulators. Kinematic programming techniques are discussed, which show that some pseudo-inverse techniques that have been proposed for redundant manipulator control fail to achieve the goals of avoiding kinematic singularities and also generating closed joint-space paths corresponding to closed paths of the end-effector in the workspace. The extended Jacobian is proposed as an alternative to pseudo-inverse techniques.
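
    The pseudo-inverse rate law whose limitations the report analyzes is, in its usual form, the minimum-norm solution plus a null-space term; a small sketch (not from the report) with an illustrative 2x3 Jacobian:

    ```python
    import numpy as np

    def redundant_rate(J, x_dot, q_dot_secondary):
        """Pseudo-inverse rate law with a null-space term:
        q_dot = J^+ x_dot + (I - J^+ J) q_dot_secondary.
        The second term moves the joints without disturbing the end-effector."""
        J_pinv = np.linalg.pinv(J)
        null_proj = np.eye(J.shape[1]) - J_pinv @ J   # projector onto the Jacobian null space
        return J_pinv @ x_dot + null_proj @ q_dot_secondary

    # Toy 2x3 Jacobian: one redundant degree of freedom.
    J = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 1.0]])
    q_dot = redundant_rate(J, np.array([0.1, 0.0]), np.array([0.0, 0.0, 0.2]))
    ```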

  3. A cut-&-paste strategy for the 3-D inversion of helicopter-borne electromagnetic data - I. 3-D inversion using the explicit Jacobian and a tensor-based formulation

    NASA Astrophysics Data System (ADS)

    Scheunert, M.; Ullmann, A.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K.

    2016-06-01

We present an inversion concept for helicopter-borne frequency-domain electromagnetic (HEM) data capable of reconstructing 3-D conductivity structures in the subsurface. Standard interpretation procedures often involve laterally constrained stitched 1-D inversion techniques to create pseudo-3-D models that are largely representative for smoothly varying conductivity distributions in the subsurface. Pronounced lateral conductivity changes may, however, produce significant artifacts that can lead to serious misinterpretation. Still, 3-D inversions of entire survey data sets are numerically very expensive. Our approach is therefore based on a cut-&-paste strategy whereby the full 3-D inversion needs to be applied only to those parts of the survey where the 1-D inversion actually fails. The introduced 3-D Gauss-Newton inversion scheme exploits information given by a state-of-the-art (laterally constrained) 1-D inversion. For a typical HEM measurement, an explicit representation of the Jacobian matrix is unavoidable, owing to the unique transmitter-receiver relation. We introduce tensor quantities which facilitate the matrix assembly of the forward operator as well as the efficient calculation of the Jacobian. The finite-difference forward operator incorporates the displacement currents because they may seriously affect the electromagnetic response at frequencies above 100 kHz. Finally, we deliver the proof of concept for the inversion using a synthetic data set with a noise level of up to 5%.
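
    A generic damped Gauss-Newton model update of the kind used in the scheme above, shown as a runnable sketch rather than the authors' code; the forward operator is a placeholder linear model and the damping value lam is assumed.

    ```python
    import numpy as np

    def gauss_newton_step(m, d_obs, forward_fn, jacobian_fn, lam=1e-2):
        """One damped Gauss-Newton update for a generic inverse problem:
        (J^T J + lam*I) dm = J^T (d_obs - F(m))."""
        r = d_obs - forward_fn(m)            # data residual
        J = jacobian_fn(m)                   # explicit Jacobian (sensitivity) matrix
        H = J.T @ J + lam * np.eye(m.size)   # damped normal-equations matrix
        dm = np.linalg.solve(H, J.T @ r)
        return m + dm

    # Placeholder linear "forward model" d = G m, just to make the step runnable.
    G = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
    m_true = np.array([1.0, -0.5])
    m_est = gauss_newton_step(np.zeros(2), G @ m_true, lambda m: G @ m, lambda m: G)
    ```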

  4. Development of advanced control schemes for telerobot manipulators

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1991-01-01

    To study space applications of telerobotics, Goddard Space Flight Center (NASA) has recently built a testbed composed mainly of a pair of redundant slave arms having seven degrees of freedom and a master hand controller system. The mathematical developments required for the computerized simulation study and motion control of the slave arms are presented. The slave arm forward kinematic transformation is presented which is derived using the D-H notation and is then reduced to its most simplified form suitable for real-time control applications. The vector cross product method is then applied to obtain the slave arm Jacobian matrix. Using the developed forward kinematic transformation and quaternions representation of the slave arm end-effector orientation, computer simulation is conducted to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. In addition, the equivalence between Cartesian velocities and quaternion is also verified using computer simulation. The motion control of the slave arm is examined. Three control schemes, the joint-space adaptive control scheme, the Cartesian adaptive control scheme, and the hybrid position/force control scheme are proposed for controlling the motion of the slave arm end-effector. Development of the Cartesian adaptive control scheme is presented and some preliminary results of the remaining control schemes are presented and discussed.
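
    The equivalence checked above between Cartesian angular velocity and quaternion rates rests on the kinematic relation q_dot = 1/2 q ⊗ [0, ω]; a small sketch (not from the record; the body-frame convention is assumed):

    ```python
    import numpy as np

    def quat_mult(a, b):
        """Hamilton product of quaternions in [w, x, y, z] order."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def quat_rate(q, omega_body):
        """Quaternion kinematics: q_dot = 0.5 * q ⊗ [0, omega]."""
        return 0.5 * quat_mult(q, np.concatenate(([0.0], omega_body)))

    q = np.array([1.0, 0.0, 0.0, 0.0])      # identity orientation
    omega = np.array([0.0, 0.0, 0.1])       # body angular velocity (rad/s)
    q = q + 0.01 * quat_rate(q, omega)      # one Euler step over a 10 ms sample
    q /= np.linalg.norm(q)                  # renormalize to keep a unit quaternion
    ```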

  5. Robust Inversion and Data Compression in Control Allocation

    NASA Technical Reports Server (NTRS)

    Hodel, A. Scottedward

    2000-01-01

We present an off-line computational method for control allocation design. The control allocation function δ = F(z)τ + δ₀(z), mapping commanded body-frame torques to actuator commands, is implicitly specified by the trim condition δ₀(z) and by a robust pseudo-inverse problem ‖I − G(z)F(z)‖ < ε(z), where G(z) is a system Jacobian evaluated at operating point z, ẑ is an estimate of z, and ε(z) < 1 is a specified error tolerance. The allocation function F(z) = Σᵢ ψᵢ(z) Fᵢ is computed using a heuristic technique for selecting the wavelet basis functions ψᵢ and a constrained least-squares criterion for selecting the allocation matrices Fᵢ. The method is applied to entry trajectory control allocation for a reusable launch vehicle (X-33).
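
    A simplified sketch of the allocation idea in the abstract: choose F as an approximate right inverse of the torque Jacobian G and check the pseudo-inverse residual ‖I − G F‖ against a tolerance. The wavelet parameterization and the trim term of the paper are omitted, and G and the commanded torques are made-up values.

    ```python
    import numpy as np

    # Control effectiveness (Jacobian) G maps 5 actuator commands to 3 body torques.
    rng = np.random.default_rng(1)
    G = rng.standard_normal((3, 5))

    # Simplest robust choice: the Moore-Penrose right inverse, F = G^T (G G^T)^-1.
    F = np.linalg.pinv(G)

    # Check the pseudo-inverse condition ||I - G F|| < eps from the abstract.
    eps = 1e-9
    residual = np.linalg.norm(np.eye(3) - G @ F, ord=2)
    assert residual < eps

    tau_cmd = np.array([0.2, -0.1, 0.05])   # commanded body-frame torques
    delta = F @ tau_cmd                     # actuator commands (trim term omitted)
    ```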

  6. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
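
    A sketch of the iteration described: a Newton-Raphson update solved in the least-squares sense with an SVD-based pseudo-inverse of the Jacobian. The residual here is a toy overdetermined linear system rather than the camera-calibration equations.

    ```python
    import numpy as np

    def svd_pinv(J, rcond=1e-10):
        """Pseudo-inverse via singular value decomposition."""
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)
        return Vt.T @ np.diag(s_inv) @ U.T

    def newton_least_squares(x0, residual_fn, jacobian_fn, iters=20):
        """Newton-Raphson on an overdetermined system of nonlinear equations,
        using the SVD pseudo-inverse of the analytic Jacobian at each step."""
        x = x0.copy()
        for _ in range(iters):
            x = x - svd_pinv(jacobian_fn(x)) @ residual_fn(x)
        return x

    # Toy overdetermined problem: three linear equations in two unknowns.
    A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
    b = np.array([1.0, 0.2, 1.8])
    x = newton_least_squares(np.zeros(2), lambda x: A @ x - b, lambda x: A)
    ```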

  7. Inversion Of Jacobian Matrix For Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

Report discusses inversion of the Jacobian matrix for a class of six-degree-of-freedom arms with a spherical wrist, i.e., with the last three joint axes intersecting. Shows that, by taking advantage of the simple geometry of such arms, the closed-form solution of Q = J⁻¹X, which represents the linear transformation from task space to joint space, is obtained efficiently. Presents solutions for the PUMA arm, the JPL/Stanford arm, and a six-revolute-joint coplanar arm, along with all singular points. Main contribution of the paper is showing that the simple geometry of this type of arm can be exploited to perform the inverse transformation without any need to compute the Jacobian or its inverse explicitly. An implication of this computational efficiency is that advanced task-space control schemes for spherical-wrist arms can be implemented more efficiently.

  8. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and the divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and are compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem for a regularized misfit function. In order to avoid the huge memory requirement and very long run time of computing the Jacobian sensitivity matrix for the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, which can be transformed into two pseudo-forward modeling problems. This avoids the explicit calculation and storage of the full Jacobian matrix, which leads to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with horizontal and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively applied to 3-D MT field data inversion.
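
    A sketch of the matrix-free idea in the abstract: conjugate gradients on the normal equations driven only by callbacks for J·x and Jᵀ·y (realized in the paper by pseudo-forward modeling runs). Here the callbacks simply wrap a small dense matrix so the code runs.

    ```python
    import numpy as np

    def cg_normal_equations(Jv, JTv, d, n, iters=50, tol=1e-10):
        """Conjugate gradients on J^T J x = J^T d using only the matrix-vector
        callbacks Jv(x) and JTv(y); the Jacobian is never formed explicitly."""
        x = np.zeros(n)
        r = JTv(d - Jv(x))          # initial residual of the normal equations
        p = r.copy()
        rs = r @ r
        for _ in range(iters):
            Ap = JTv(Jv(p))
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # Stand-in sensitivity matrix and data; in the paper these products come
    # from pseudo-forward modeling runs instead of a stored matrix.
    J = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, -1.0]])
    d = np.array([3.0, 1.0, 0.0])
    x = cg_normal_equations(lambda v: J @ v, lambda y: J.T @ y, d, n=2)
    ```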

  9. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  10. Algorithms for Nonlinear Least-Squares Problems

    DTIC Science & Technology

    1988-09-01

The nonlinear least-squares objective is ½ Σᵢ fᵢ(x)², where each fᵢ(x) is a smooth function mapping Rⁿ to R; J is the m × n Jacobian matrix of f, and g is the gradient of the nonlinear least-squares objective. Convergence results and a detailed convergence analysis are given for the Gauss-Newton method, for a class of nonlinear least-squares problems that includes zero-residual problems. The function Jₖ⁺ is the pseudo-inverse of Jₖ.

  11. Correlation of spacecraft thermal mathematical models to reference data

    NASA Astrophysics Data System (ADS)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

Model-to-test correlation is a frequent problem in spacecraft-thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow reaching a good fit between the TMM results and test data, in order to reduce the uncertainty of the mathematical model. Quite often, this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method is presented suitable to solve the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. Besides, in simple cases, this method also allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered, one more academic and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
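
    A minimal sketch (not the authors' tool) of the correlation step described: residuals and Jacobians from several load cases are stacked, and the parameter vector is updated with the Moore-Penrose pseudo-inverse of the stacked Jacobian. The two linear "load cases" below are placeholders for a real thermal mathematical model.

    ```python
    import numpy as np

    def correlate_parameters(p, cases, iters=5):
        """Fit thermal-model parameters p to reference temperatures by stacking
        several load cases and applying the Moore-Penrose pseudo-inverse."""
        for _ in range(iters):
            r = np.concatenate([c["t_ref"] - c["model"](p) for c in cases])
            J = np.vstack([c["jacobian"](p) for c in cases])
            p = p + np.linalg.pinv(J) @ r
        return p

    # Placeholder: two "load cases" of a linear nodal model T = A p + b.
    A1, b1 = np.array([[2.0, 0.0], [1.0, 1.0]]), np.array([0.5, 0.0])
    A2, b2 = np.array([[0.0, 3.0], [1.0, -1.0]]), np.array([0.0, 0.2])
    p_true = np.array([1.2, 0.8])
    cases = [
        {"t_ref": A1 @ p_true + b1, "model": lambda p: A1 @ p + b1, "jacobian": lambda p: A1},
        {"t_ref": A2 @ p_true + b2, "model": lambda p: A2 @ p + b2, "jacobian": lambda p: A2},
    ]
    p_est = correlate_parameters(np.zeros(2), cases)
    ```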

  12. CLFs-based optimization control for a class of constrained visual servoing systems.

    PubMed

    Song, Xiulan; Miaomiao, Fu

    2017-03-01

In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and of changes in target depth, visual servo control laws (i.e. translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal values of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that there is no requirement to calculate online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. 3-D minimum-structure inversion of magnetotelluric data using the finite-element method and tetrahedral grids

    NASA Astrophysics Data System (ADS)

    Jahandari, H.; Farquharson, C. G.

    2017-11-01

    Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration which greatly minimizes the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions sufficiently recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.

  14. Kinematic equations for control of the redundant eight-degree-of-freedom advanced research manipulator 2

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    The forward position and velocity kinematics for the redundant eight-degree-of-freedom Advanced Research Manipulator 2 (ARM2) are presented. Inverse position and velocity kinematic solutions are also presented. The approach in this paper is to specify two of the unknowns and solve for the remaining six unknowns. Two unknowns can be specified with two restrictions. First, the elbow joint angle and rate cannot be specified because they are known from the end-effector position and velocity. Second, one unknown must be specified from the four-jointed wrist, and the second from joints that translate the wrist, elbow joint excluded. There are eight solutions to the inverse position problem. The inverse velocity solution is unique, assuming the Jacobian matrix is not singular. A discussion of singularities is based on specifying two joint rates and analyzing the reduced Jacobian matrix. When this matrix is singular, the generalized inverse may be used as an alternate solution. Computer simulations were developed to verify the equations. Examples demonstrate agreement between forward and inverse solutions.

  15. Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straight-forward and computationally inexpensive. Given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions. The position and orientation of the moving platform with respect to the base is calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is derived. Given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
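
    The inverse position solution described above reduces, for each leg, to the distance between its base attachment point and its platform attachment point after the commanded pose is applied; a sketch with made-up attachment geometry (not the VES dimensions):

    ```python
    import numpy as np

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def stewart_leg_lengths(base_pts, plat_pts, R, t):
        """Inverse position kinematics of a Stewart platform: each leg length is
        the distance from a base point to the transformed platform point."""
        moved = (R @ plat_pts.T).T + t                 # platform attachments in base frame
        return np.linalg.norm(moved - base_pts, axis=1)

    # Made-up attachment points for 6 legs; pose = small yaw plus a vertical offset.
    ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
    ang_p = ang_b + np.deg2rad(20)
    base_pts = np.c_[np.cos(ang_b), np.sin(ang_b), np.zeros(6)] * 1.0   # base radius 1.0 m
    plat_pts = np.c_[np.cos(ang_p), np.sin(ang_p), np.zeros(6)] * 0.6   # platform radius 0.6 m
    lengths = stewart_leg_lengths(base_pts, plat_pts, rot_z(0.05), np.array([0.0, 0.0, 0.8]))
    ```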

  16. Coordinated Control of a Planar Dual-Crane Non-Fully Restrained System

    DTIC Science & Technology

    2008-12-01

Fragment from the report's list of figures and text: ... Support Over-The-Shore (HSOTS) 2007 in Puerto Quetzal, Guatemala; Figure 27, Reference frame and coordinate definitions. The text excerpt notes that the approach does not require the construction of an inverse of the Jacobian, but rather the transpose only, and that the effector ...

  17. Efficient Jacobian inversion for the control of simple robot manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1988-01-01

Symbolic inversion of the Jacobian matrix for spherical wrist arms is investigated. It is shown that, taking advantage of the simple geometry of these arms, the closed-form solution of the system Q = J⁻¹X, representing a transformation from task space to joint space, can be obtained very efficiently. The solutions for PUMA, Stanford, and a six-revolute-joint coplanar arm, along with all singular points, are presented. The solution for each joint variable is found as an explicit function of the singular points which provides a better insight into the effect of different singular points on the motion and force exertion of each individual joint. For the above arms, the computation cost of the solution is on the same order as the cost of the forward kinematic solution and it is significantly reduced if the forward kinematic solution is already obtained. A comparison with previous methods shows that this method is the most efficient to date.

  18. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed, using thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
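
    A sketch of the two ingredients described: thresholding small Jacobian entries to obtain a sparse matrix, then solving the regularized least-squares reconstruction iteratively. SciPy's LSQR stands in for the block-wise parallel CG of the paper, and the Jacobian, data and threshold are synthetic.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import lsqr

    def sparsify_jacobian(J, rel_threshold=1e-3):
        """Zero out Jacobian entries below a fraction of the largest magnitude
        and store the result in sparse format to save memory."""
        J = np.where(np.abs(J) < rel_threshold * np.abs(J).max(), 0.0, J)
        return csr_matrix(J)

    # Stand-in EIT-like problem: random sensitivity matrix and simulated data.
    rng = np.random.default_rng(2)
    J = rng.standard_normal((200, 500)) * (rng.random((200, 500)) < 0.05)
    x_true = np.zeros(500)
    x_true[100:110] = 1.0
    d = J @ x_true

    J_sparse = sparsify_jacobian(J)
    x_rec = lsqr(J_sparse, d, damp=0.1)[0]   # damped least-squares reconstruction
    ```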

  19. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets, collected respectively in a watershed and in a coal mine area in Northern China, and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global searching algorithm, simulated annealing (SA), in the watershed shows that although both methods deliver similarly good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.

  20. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

This paper presents a general method for calculating the inverse kinematics, with singularity and joint-limit robustness, for both redundant and non-redundant serial-link manipulators. A damped least-squares inverse of the Jacobian is used with dynamic weighting matrices to approximate the solution; the weighting reduces specific joint differential vectors. The algorithm gives an exact solution away from singularities and joint limits, and an approximate solution at or near singularities and/or joint limits. The procedure was implemented for a six-d.o.f. teleoperator, and a well-behaved slave manipulator resulted under teleoperational control.
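
    A sketch of a damped least-squares step with a diagonal weighting matrix of the kind described; the damping factor and joint weights below are illustrative rather than the paper's dynamic weighting rule.

    ```python
    import numpy as np

    def dls_weighted_step(J, dx, damping=0.05, w_diag=None):
        """Damped least-squares inverse kinematics step:
        dq = W^-1 J^T (J W^-1 J^T + damping^2 I)^-1 dx,
        where larger weights in W discourage motion of particular joints."""
        m, n = J.shape
        W_inv = np.eye(n) if w_diag is None else np.diag(1.0 / np.asarray(w_diag))
        A = J @ W_inv @ J.T + (damping ** 2) * np.eye(m)
        return W_inv @ J.T @ np.linalg.solve(A, dx)

    # Near-singular 2x3 Jacobian: the damping keeps the step bounded, and a large
    # weight on joint 3 discourages using it (e.g. it is close to a joint limit).
    J = np.array([[1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0 + 1e-6]])
    dq = dls_weighted_step(J, np.array([0.01, 0.0]), w_diag=[1.0, 1.0, 50.0])
    ```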

  1. Strategies to Enhance the Model Update in Regions of Weak Sensitivities for Use in Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Nuber, André; Manukyan, Edgar; Maurer, Hansruedi

    2014-05-01

    Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need for separating events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method includes equilibrating the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to also balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. We decided to normalise the columns of the Jacobian based on their absolute column sum, but defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second method proposed includes adjusting the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are in the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are quite balanced and span a much smaller range than in the case of a regular inversion grid.
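
    A small sketch of the first strategy described (not the authors' code): each Jacobian column is scaled by the inverse of its absolute column sum, with an upper threshold on the scaling factor so that insignificant sensitivities are not over-boosted. The cap value is illustrative.

    ```python
    import numpy as np

    def balance_jacobian_columns(J, max_scale=1e3):
        """Normalize each Jacobian column by its absolute column sum, capping
        the scaling factors at max_scale."""
        col_sums = np.abs(J).sum(axis=0)
        scale = 1.0 / np.maximum(col_sums, 1.0 / max_scale)   # capped at max_scale
        return J * scale, scale        # column-scaled Jacobian and its factors

    # Columns with very different sensitivities (e.g. shallow vs. deep cells).
    J = np.array([[10.0, 1e-4, 0.5],
                  [20.0, 2e-4, 0.2]])
    J_scaled, factors = balance_jacobian_columns(J)
    ```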

  2. Principal Component Geostatistical Approach for large-dimensional inverse problems

    PubMed Central

    Kitanidis, P K; Lee, J

    2014-01-01

The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, is high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free in terms of the Jacobian matrix and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementation of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113

  3. Principal Component Geostatistical Approach for large-dimensional inverse problems.

    PubMed

    Kitanidis, P K; Lee, J

    2014-07-01

The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, is high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free in terms of the Jacobian matrix and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementation of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best.

  4. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code, special emphasis has been put on representing the operations by block matrices for the conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field-data inversion experiment is presented to demonstrate the robustness of the code.

  5. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    NASA Astrophysics Data System (ADS)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator, Jacobian calculations, and synthetic and real data inversions are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, correction may be applied after the forward solution is calculated. It allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, MT forward response and Jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers reduction in memory requirement and a significant speedup of the parameter step versus the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic inversions we examine the importance of including the topography in the inversion and we test different regularization schemes using a weighted second norm of the model gradient as well as inverting for a static distortion matrix following the Miensopust/Avdeeva approach. We also apply our algorithm to invert MT data collected at Mt St Helens.

  6. Algorithmic vs. finite difference Jacobians for infrared atmospheric radiative transfer

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; Gimeno García, Sebastián; Vasquez, Mayte; Xu, Jian

    2015-10-01

    Jacobians, i.e. partial derivatives of the radiance and transmission spectrum with respect to the atmospheric state parameters to be retrieved from remote sensing observations, are important for the iterative solution of the nonlinear inverse problem. Finite difference Jacobians are easy to implement, but computationally expensive and possibly of dubious quality; on the other hand, analytical Jacobians are accurate and efficient, but the implementation can be quite demanding. GARLIC, our "Generic Atmospheric Radiation Line-by-line Infrared Code", utilizes algorithmic differentiation (AD) techniques to implement derivatives w.r.t. atmospheric temperature and molecular concentrations. In this paper, we describe our approach for differentiation of the high resolution infrared and microwave spectra and provide an in-depth assessment of finite difference approximations using "exact" AD Jacobians as a reference. The results indicate that the "standard" two-point finite differences with 1 K and 1% perturbation for temperature and volume mixing ratio, respectively, can exhibit substantial errors, and central differences are significantly better. However, these deviations do not transfer into the truncated singular value decomposition solution of a least squares problem. Nevertheless, AD Jacobians are clearly recommended because of the superior speed and accuracy.
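
    A small numerical illustration of the comparison made above (not GARLIC code): forward and central finite-difference Jacobians of a toy two-component model are checked against its analytic Jacobian; the step of 0.1 is deliberately coarse so the forward-difference error is visible.

    ```python
    import numpy as np

    def jac_forward(f, x, h):
        """One-sided (forward) finite-difference Jacobian with fixed step h."""
        f0 = f(x)
        J = np.zeros((f0.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - f0) / h
        return J

    def jac_central(f, x, h):
        """Central finite-difference Jacobian with fixed step h."""
        J = np.zeros((f(x).size, x.size))
        for j in range(x.size):
            xp, xm = x.copy(), x.copy()
            xp[j] += h
            xm[j] -= h
            J[:, j] = (f(xp) - f(xm)) / (2.0 * h)
        return J

    # Toy nonlinear model with a known analytic Jacobian.
    f = lambda x: np.array([np.exp(-x[0] * x[1]), np.sin(x[0]) + x[1] ** 2])
    jac_exact = lambda x: np.array([
        [-x[1] * np.exp(-x[0] * x[1]), -x[0] * np.exp(-x[0] * x[1])],
        [np.cos(x[0]), 2.0 * x[1]],
    ])
    x = np.array([0.7, 1.3])
    err_fwd = np.abs(jac_forward(f, x, 0.1) - jac_exact(x)).max()
    err_cen = np.abs(jac_central(f, x, 0.1) - jac_exact(x)).max()   # noticeably smaller
    ```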

  7. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for appraisal of the models thus obtained. While neither the most general nor the most efficient method, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison to simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.

  8. Telerobotic control of the seven-degree-of-freedom CESAR manipulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babcock, S.M.; Dubey, R.V.; Euler, J.A.

    1988-01-01

The application of a computationally efficient kinematic control scheme for manipulators with redundant degrees of freedom to the unilateral telerobotic control of the seven-degree-of-freedom manipulator (CESARM) at the Oak Ridge National Laboratory Center for Engineering Systems Advanced Research is presented. The kinematic control scheme uses a gradient projection optimization method, which eliminates the need to determine the generalized inverse of the Jacobian when solving for joint velocities, given Cartesian end-effector velocities. A six-degree-of-freedom (nonreplica) master controller is used. Performance indices for redundancy resolution are discussed. 5 ref., 6 figs.

  9. Inverse optimal self-tuning PID control design for an autonomous underwater vehicle

    NASA Astrophysics Data System (ADS)

    Rout, Raja; Subudhi, Bidyadhar

    2017-01-01

This paper presents a new approach to path-following control design for an autonomous underwater vehicle (AUV). A NARMAX model of the AUV is derived first and then its parameters are adapted online using the recursive extended least squares algorithm. An adaptive Proportional-Integral-Derivative (PID) controller is developed using the derived parameters to accomplish the path-following task of an AUV. The gain parameters of the PID controller are tuned using an inverse optimal control technique, which alleviates the problem of solving the Hamilton-Jacobi-Bellman equation and also satisfies an error cost function. Simulation studies were pursued to verify the efficacy of the proposed control algorithm. From the obtained results, it is envisaged that the proposed NARMAX model-based self-tuning adaptive PID control provides good path-following performance even in the presence of uncertainty arising due to ocean currents or hydrodynamic parameters.

  10. Solar Sail Attitude Control Performance Comparison

    NASA Technical Reports Server (NTRS)

    Bladt, Jeff J.; Lawrence, Dale A.

    2005-01-01

    Performance of two solar sail attitude control implementations is evaluated. One implementation employs four articulated reflective vanes located at the periphery of the sail assembly to generate control torque about all three axes. A second attitude control configuration uses mass on a gimbaled boom to alter the center-of-mass location relative to the center-of-pressure producing roll and pitch torque along with a pair of articulated control vanes for yaw control. Command generation algorithms employ linearized dynamics with a feedback inversion loop to map desired vehicle attitude control torque into vane and/or gimbal articulation angle commands. We investigate the impact on actuator deflection angle behavior due to variations in how the Jacobian matrix is incorporated into the feedback inversion loop. Additionally, we compare how well each implementation tracks a commanded thrust profile, which has been generated to follow an orbit trajectory from the sun-earth L1 point to a sub-L1 station.

  11. Assessment of pseudo-bilayer structures in the heterogate germanium electron-hole bilayer tunnel field-effect transistor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padilla, J. L., E-mail: jose.padilladelatorre@epfl.ch; Alper, C.; Ionescu, A. M.

    2015-06-29

We investigate the effect of pseudo-bilayer configurations at low operating voltages (≤0.5 V) in the heterogate germanium electron-hole bilayer tunnel field-effect transistor (HG-EHBTFET) compared to the traditional bilayer structures of EHBTFETs arising from semiclassical simulations where the inversion layers for electrons and holes featured very symmetric profiles with similar concentration levels at the ON-state. Pseudo-bilayer layouts are attained by inducing a certain asymmetry between the top and the bottom gates so that even though the hole inversion layer is formed at the bottom of the channel, the top gate voltage remains below the required value to trigger the formation of the inversion layer for electrons. Resulting benefits from this setup are improved electrostatic control on the channel, enhanced gate-to-gate efficiency, and higher I_ON levels. Furthermore, pseudo-bilayer configurations alleviate the difficulties derived from confining very high opposite carrier concentrations in very thin structures.

  12. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.

  13. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
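
    A sketch of the truncation idea described: decompose the Jacobian by SVD, discard singular values below a relative threshold, and use the resulting truncated pseudo-inverse for the model update. Matrix, data and cutoff are illustrative.

    ```python
    import numpy as np

    def truncated_svd_solve(J, r, rel_cutoff=1e-2):
        """Model update dm = V S_t^-1 U^T r, keeping only singular values above
        rel_cutoff times the largest one; truncation stabilizes the inversion."""
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        keep = s > rel_cutoff * s[0]
        s_inv = np.zeros_like(s)
        s_inv[keep] = 1.0 / s[keep]
        return Vt.T @ (s_inv * (U.T @ r)), int(keep.sum())

    # Ill-conditioned toy Jacobian: the smallest singular value mostly carries noise.
    J = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8], [0.5, 0.5]])
    r = np.array([0.3, 0.3, 0.15])
    dm, n_kept = truncated_svd_solve(J, r)
    ```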

  14. Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.

    PubMed

    Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C

    2015-05-13

    This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability of twirling around objects as a real octopus arm does. Despite the simple design, the soft conical shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network learning the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with a degree of accuracy of 8% relative average error with respect to the total arm length.

  15. A Fortran 77 computer code for damped least-squares inversion of Slingram electromagnetic anomalies over thin tabular conductors

    NASA Astrophysics Data System (ADS)

    Dondurur, Derman; Sarı, Coşkun

    2004-07-01

A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. A damped least-squares inversion algorithm is used to estimate the anomalous body parameters, e.g. depth, dip and surface projection point of the target. Iteration progress is controlled by the maximum relative error value, and iteration continues until a tolerance value is satisfied, while the modification of Marquardt's parameter is controlled by the sum of the squared errors. In order to form the Jacobian matrix, the partial derivatives of the theoretical anomaly expression with respect to the parameters being optimised are calculated numerically using first-order forward finite differences. A theoretical anomaly and two field anomalies are inverted to test the accuracy and applicability of the present inversion program. Inversion of the field data indicated that the depth and surface projection point parameters of the conductor are estimated correctly; however, considerable discrepancies appeared in the estimated dip angles. It is therefore concluded that the most important factor resulting in the misfit between observed and calculated data is that the theory used for computing Slingram anomalies is valid only for thin conductors, and this assumption might have caused incorrect dip estimates in the case of wide conductors.

  16. WE-AB-202-03: Quantifying Ventilation Change Due to Radiation Therapy Using 4DCT Jacobian Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, T; Du, K; Bayouth, J

Purpose: Four-dimensional computed tomography (4DCT) and image registration can be used to determine regional lung ventilation changes after radiation therapy (RT). This study aimed to determine if lung ventilation change following radiation therapy was affected by the pre-RT ventilation of the lung. Methods: 13 subjects had three 4DCT scans: two repeat scans acquired before RT and one three months after RT. Regional ventilation was computed using Jacobian determinant calculations on the registered 4DCT images. The post-RT ventilation map was divided by the pre-RT ventilation map to get a voxel-by-voxel Jacobian ratio map depicting ventilation change over the course of RT. Jacobian ratio change was compared over the range of delivered doses. The first pre-RT ventilation image was divided by the second to establish a control for Jacobian ratio change without radiation delivered. The functional change between scans was assessed using histograms of the Jacobian ratios. Results: There were significantly (p < 0.05) more voxels that had a large decrease in Jacobian ratio in the post-RT divided by pre-RT map (15.6%) than the control (13.2%). There were also significantly (p < 0.01) more voxels that had a large increase in Jacobian ratio (16.2%) when compared to control (13.3%). Lung regions with low function (<10% expansion by Jacobian) showed a slight linear reduction in expansion (0.2%/10 Gy delivered), while high function regions (>10% expansion) showed a greater response (1.2% reduction/10 Gy). Contiguous high function regions > 1 liter occurred in 11 of 13 subjects. Conclusion: There is a significant change in regional ventilation following a course of radiation therapy. The change in Jacobian following RT is dependent both on the delivered dose and the initial ventilation of the lung tissue: high functioning lung has greater ventilation loss for equivalent radiation doses. Substantial regions of high function lung tissue are prevalent. Research support from NIH grants CA166119 and CA166703, a gift from Roger Koch, and a Pilot Grant from University of Iowa Carver College of Medicine.

17. An optimal resolved rate law for kinematically redundant manipulators

    NASA Technical Reports Server (NTRS)

    Bourgeois, B. J.

    1987-01-01

The resolved rate law for a manipulator provides the instantaneous joint rates required to satisfy a given instantaneous hand motion. When the joint space has more degrees of freedom than the task space, the manipulator is kinematically redundant and the kinematic rate equations are underdetermined. These equations can be locally optimized, but the resulting pseudo-inverse solution was found to cause large joint rates in some cases. A weighting matrix in the locally optimized (pseudo-inverse) solution is dynamically adjusted to control the joint motion as desired. Joint reach limit avoidance is demonstrated in a kinematically redundant planar arm model. The treatment is applicable to redundant manipulators with any number of revolute joints and to nonplanar manipulators.

  18. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. By our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
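
    A one-dimensional sketch of a fixed-point DVF inversion iteration with a simple constant feedback gain (the paper's adaptive and spatially variant controls are not reproduced); the displacement field u and the gain mu are illustrative.

    ```python
    import numpy as np

    def invert_dvf_1d(u, y, mu=0.7, iters=50):
        """Fixed-point iteration for the inverse displacement v at points y:
        residual  r_k = v_k + u(y + v_k)     (inverse-consistency residual)
        update    v_{k+1} = v_k - mu * r_k
        mu = 1 gives the classical iteration; 0 < mu < 1 is a constant feedback
        gain that damps the update."""
        v = np.zeros_like(y)
        for _ in range(iters):
            r = v + u(y + v)
            v = v - mu * r
        return v

    # Smooth 1-D forward displacement field; its inverse is found numerically.
    u = lambda x: 0.3 * np.sin(x)
    y = np.linspace(0.0, 2.0 * np.pi, 9)
    v = invert_dvf_1d(u, y)
    ic_error = np.abs(v + u(y + v)).max()   # inverse-consistency check
    ```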

  19. An improved nearly-orthogonal structured mesh generation system with smoothness control functions

    USDA-ARS?s Scientific Manuscript database

    This paper presents an improved nearly-orthogonal structured mesh generation system with a set of smoothness control functions, which were derived based on the ratio between the Jacobian of the transformation matrix and the Jacobian of the metric tensor. The proposed smoothness control functions are...

  20. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  1. Inversion of Density Interfaces Using the Pseudo-Backpropagation Neural Network Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaohong; Du, Yukun; Liu, Zhan; Zhao, Wenju; Chen, Xiaocheng

    2018-05-01

    This paper presents a new pseudo-backpropagation (BP) neural network method that can invert multi-density interfaces at one time. The new method is based on the conventional forward modeling and inverse modeling theories in addition to conventional pseudo-BP neural network arithmetic. A 3D inversion model for gravity anomalies of multi-density interfaces using the pseudo-BP neural network method is constructed after analyzing the structure and function of the artificial neural network. The corresponding iterative inverse formula of the space field is presented at the same time. Based on trials of gravity anomalies and density noise, the influence of the two kinds of noise on the inverse result is discussed and the scale of noise required for the stability of the arithmetic is analyzed. The effects of the initial model on the reduction of the ambiguity of the result and improvement of the precision of inversion are discussed. The correctness and validity of the method were verified by a 3D model of three interfaces. 3D inversion was performed on the observed gravity anomaly data of the Okinawa trough using the program presented herein. The Tertiary basement and Moho depth were obtained from the inversion results, which also testify to the adaptability of the method. This study has made a useful attempt at the inversion of gravity density interfaces.

  2. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Gary E.; Song, Joo Hyun; Lu, Wei

    2007-06-15

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2=0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2=0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2=0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.

  3. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry.

    PubMed

    Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A

    2007-06-01

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.
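
    The log-Jacobian expansion/compression map used in the two records above can be sketched for a synthetic 2-D displacement field as follows (Python/NumPy; the field, grid size, and unit voxel spacing are illustrative, not CT-derived):

      import numpy as np

      def log_jacobian_map(ux, uy):
          """Voxel-wise log Jacobian determinant of phi(x) = x + u(x) in 2-D (unit spacing).

          Positive values mark local expansion, negative values local compression;
          the logarithm scales expansion and compression symmetrically.
          """
          duy_dy, duy_dx = np.gradient(uy)
          dux_dy, dux_dx = np.gradient(ux)
          det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
          return np.log(det)

      y, x = np.mgrid[0:64, 0:64]
      ux = 0.5 * np.sin(2.0 * np.pi * x / 64.0)   # synthetic smooth displacement (illustrative)
      uy = 0.5 * np.sin(2.0 * np.pi * y / 64.0)

      logj = log_jacobian_map(ux, uy)
      print("mean log-Jacobian over the slice:", logj.mean())

    Averaging such a map over a lung cross section gives the quantity that the study correlates with the spirometry air flow rate.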

  4. Inversion of high frequency surface waves with fundamental and higher modes

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.; Tian, G.

    2003-01-01

    The phase velocity of Rayleigh-waves of a layered earth model is a function of frequency and four groups of earth parameters: compressional (P)-wave velocity, shear (S)-wave velocity, density, and thickness of layers. For the fundamental mode of Rayleigh waves, analysis of the Jacobian matrix for high frequencies (2-40 Hz) provides a measure of dispersion curve sensitivity to earth model parameters. S-wave velocities are the dominant influence among the four earth model parameters. This is true for higher modes of high-frequency Rayleigh waves as well. Our numerical modeling by analysis of the Jacobian matrix supports at least two quite exciting higher mode properties. First, for fundamental and higher mode Rayleigh wave data with the same wavelength, higher modes can "see" deeper than the fundamental mode. Second, higher mode data can increase the resolution of the inverted S-wave velocities. Real world examples show that the inversion process can be stabilized and resolution of the S-wave velocity model can be improved when simultaneously inverting the fundamental and higher mode data. © 2002 Elsevier Science B.V. All rights reserved.

  5. An optimal resolved rate law for kinematically redundant manipulators

    NASA Technical Reports Server (NTRS)

    Bourgeois, B. J.

    1987-01-01

    The resolved rate law for a manipulator provides the instantaneous joint rates required to satisfy a given instantaneous hand motion. When the joint space has more degrees of freedom than the task space, the manipulator is kinematically redundant and the kinematic rate equations are underdetermined. These equations can be locally optimized, but the resulting pseudo-inverse solution has been found to cause large joint rates in some cases. A weighting matrix in the locally optimized (pseudo-inverse) solution is dynamically adjusted to control the joint motion as desired. Joint reach limit avoidance is demonstrated in a kinematically redundant planar arm model. The treatment is applicable to redundant manipulators with any number of revolute joints and to non-planar manipulators.

  6. Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gary D. Egbert

    2007-03-22

    The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton type Occam minimum structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before approaching more computationally cumbersome three-dimensional problems.

  7. Adaptive Jacobian Fuzzy Attitude Control for Flexible Spacecraft Combined Attitude and Sun Tracking System

    NASA Astrophysics Data System (ADS)

    Chak, Yew-Chung; Varatharajoo, Renuganth

    2016-07-01

    Many spacecraft attitude control systems today use reaction wheels to deliver precise torques to achieve three-axis attitude stabilization. However, irrecoverable mechanical failure of reaction wheels could potentially lead to mission interruption or total loss. The electrically powered Solar Array Drive Assemblies (SADA), which are usually installed in the pitch axis and rotate the solar arrays to track the Sun, can produce torques to compensate for a pitch-axis wheel failure. In addition, the attitude control of a flexible spacecraft poses a difficult problem. These difficulties include the strong nonlinear coupled dynamics between the rigid hub and flexible solar arrays, and the imprecisely known system parameters, such as inertia matrix, damping ratios, and flexible mode frequencies. In order to overcome these drawbacks, the adaptive Jacobian tracking fuzzy control is proposed for the combined attitude and sun-tracking control problem of a flexible spacecraft during attitude maneuvers in this work. For the adaptation of kinematic and dynamic uncertainties, the proposed scheme uses an adaptive sliding vector based on estimated attitude velocity via approximate Jacobian matrix. The unknown nonlinearities are approximated by deriving the fuzzy models with a set of linguistic If-Then rules using the idea of sector nonlinearity and local approximation in fuzzy partition spaces. The uncertain parameters of the estimated nonlinearities and the Jacobian matrix are adjusted online by an adaptive law to realize feedback control. The attitude of the spacecraft can be directly controlled with the Jacobian feedback control when the attitude pointing trajectory is designed with respect to the spacecraft coordinate frame itself. A significant feature of this work is that the proposed adaptive Jacobian tracking scheme will result in not only the convergence of angular position and angular velocity tracking errors, but also the convergence of estimated angular velocity to the actual angular velocity. Numerical results are presented to demonstrate the effectiveness of the proposed scheme in tracking the desired attitude, as well as suppressing the elastic deflection effects of solar arrays during maneuver.

  8. A new family Jacobian solver for global three-dimensional modeling of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Zhao, Xuepeng; Turco, Richard P.; Shen, Mei

    1999-01-01

    We present a new technique to solve complex sets of photochemical rate equations that is applicable to global modeling of the troposphere and stratosphere. The approach is based on the concept of "families" of species, whose chemical rate equations are tightly coupled. Variations of species concentrations within a family can be determined by inverting a linearized Jacobian matrix representing the family group. Since this group consists of a relatively small number of species the corresponding Jacobian has a low order (a minimatrix) compared to the Jacobian of the entire system. However, we go further and define a super-family that is the set of all families. The super-family is also solved by linearization and matrix inversion. The resulting Super-Family Matrix Inversion (SFMI) scheme is more stable and accurate than common family approaches. We discuss the numerical structure of the SFMI scheme and apply our algorithms to a comprehensive set of photochemical reactions. To evaluate performance, the SFMI scheme is compared with an optimized Gear solver. We find that the SFMI technique can be at least an order of magnitude more efficient than existing chemical solvers while maintaining relative errors in the calculations of 15% or less over a diurnal cycle. The largest SFMI errors arise at sunrise and sunset and during the evening when species concentrations may be very low. We show that sunrise/sunset errors can be minimized through a careful treatment of photodissociation during these periods; the nighttime deviations are negligible from the point of view of acceptable computational accuracy. The stability and flexibility of the SFMI algorithm should be sufficient for most modeling applications until major improvements in other modeling factors are achieved. In addition, because of its balanced computational design, SFMI can easily be adapted to parallel computing architectures. SFMI thus should allow practical long-term integrations of global chemistry coupled to general circulation and climate models, studies of interannual and interdecadal variability in atmospheric composition, simulations of past multidecadal trends owing to anthropogenic emissions, long-term forecasting associated with projected emissions, and sensitivity analyses for a wide range of physical and chemical parameters.

  9. Configuration control of seven-degree-of-freedom arms

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor); Long, Mark K. (Inventor); Lee, Thomas S. (Inventor)

    1992-01-01

    A seven degree of freedom robot arm with a six degree of freedom end effector is controlled by a processor employing a 6 by 7 Jacobian matrix for defining location and orientation of the end effector in terms of the rotation angles of the joints, a 1 (or more) by 7 Jacobian matrix for defining 1 (or more) user specified kinematic functions constraining location or movement of selected portions of the arm in terms of the joint angles, the processor combining the two Jacobian matrices to produce an augmented 7 (or more) by 7 Jacobian matrix, the processor effecting control by computing in accordance with forward kinematics from the augmented 7 by 7 Jacobian matrix and from the seven joint angles of the arm a set of seven desired joint angles for transmittal to the joint servo loops of the arm. One of the kinematic functions constrains the orientation of the elbow plane of the arm. Another one of the kinematic functions minimizes a sum of gravitational torques on the joints. Still another kinematic function constrains the location of the arm to perform collision avoidance. Generically, one kinematic function minimizes a sum of selected mechanical parameters of at least some of the joints associated with weighting coefficients which may be changed during arm movement. The mechanical parameters may be velocity errors or gravity torques associated with individual joints.

  10. Configuration control of seven degree of freedom arms

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1995-01-01

    A seven-degree-of-freedom robot arm with a six-degree-of-freedom end effector is controlled by a processor employing a 6-by-7 Jacobian matrix for defining location and orientation of the end effector in terms of the rotation angles of the joints, a 1 (or more)-by-7 Jacobian matrix for defining 1 (or more) user-specified kinematic functions constraining location or movement of selected portions of the arm in terms of the joint angles, the processor combining the two Jacobian matrices to produce an augmented 7 (or more)-by-7 Jacobian matrix, the processor effecting control by computing in accordance with forward kinematics from the augmented 7-by-7 Jacobian matrix and from the seven joint angles of the arm a set of seven desired joint angles for transmittal to the joint servo loops of the arm. One of the kinematic functions constrains the orientation of the elbow plane of the arm. Another one of the kinematic functions minimizes a sum of gravitational torques on the joints. Still another one of the kinematic functions constrains the location of the arm to perform collision avoidance. Generically, one of the kinematic functions minimizes a sum of selected mechanical parameters of at least some of the joints associated with weighting coefficients which may be changed during arm movement. The mechanical parameters may be velocity errors or position errors or gravity torques associated with individual joints.
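
    The augmented-Jacobian construction common to the two patent records above can be sketched numerically as follows (Python/NumPy; the Jacobian entries and the commanded end-effector and constraint rates are random placeholders, since the actual arm kinematics are not reproduced here):

      import numpy as np

      rng = np.random.default_rng(0)

      J_task = rng.standard_normal((6, 7))         # 6-by-7 end-effector Jacobian (placeholder)
      J_constraint = rng.standard_normal((1, 7))   # 1-by-7 Jacobian of a user-defined kinematic function,
                                                   # e.g. the elbow-plane orientation

      J_aug = np.vstack([J_task, J_constraint])    # augmented 7-by-7 Jacobian

      xdot = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.01])   # desired end-effector rates
      cdot = np.array([0.0])                              # hold the constrained quantity fixed

      qdot = np.linalg.solve(J_aug, np.concatenate([xdot, cdot]))
      print("commanded joint rates:", np.round(qdot, 4))

    With the extra constraint row the system becomes square, so a unique joint-rate solution exists whenever the augmented Jacobian is nonsingular; the rates are then integrated into the seven desired joint angles passed to the joint servo loops.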

  11. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

    The Newton method has some shortcomings, which include the computation of the Jacobian matrix, which may be difficult or even impossible to obtain, and the need to solve the Newton system in every iteration. Also, a common setback with some quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate Jacobian inverse Hk of PSB is updated, which improves efficiency and requires low memory storage, the main aim of this paper. The preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
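
    For reference, the classical PSB update that the record builds on can be written down in a few lines (Python/NumPy; the check below only verifies the secant condition and the symmetry of the correction on random data, and is not the paper's solver, which maintains an approximate Jacobian inverse Hk):

      import numpy as np

      def psb_update(B, s, y):
          """Powell-Symmetric-Broyden update of a Jacobian approximation B.

          s = x_{k+1} - x_k and y = F(x_{k+1}) - F(x_k); the updated matrix
          satisfies the secant condition B_new @ s = y, and the correction
          B_new - B is symmetric and of rank at most two.
          """
          r = y - B @ s
          ss = s @ s
          return B + (np.outer(r, s) + np.outer(s, r)) / ss - (s @ r) * np.outer(s, s) / ss**2

      rng = np.random.default_rng(1)
      B = rng.standard_normal((4, 4))
      s = rng.standard_normal(4)
      y = rng.standard_normal(4)

      B_new = psb_update(B, s, y)
      print(np.allclose(B_new @ s, y))                  # secant condition holds
      print(np.allclose(B_new - B, (B_new - B).T))      # PSB correction is symmetric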

  12. Inverse consistent non-rigid image registration based on robust point set matching

    PubMed Central

    2014-01-01

    Background: Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except for the location at control points, RPM cannot estimate the consistent correspondence between two images because RPM is a unidirectional image matching approach. Therefore, it is an important issue to make an improvement in image registration based on RPM. Methods: In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source point sets and the target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. The inverse consistency constraints are introduced to the cost function of RPM and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. Especially, the topology of transformations is preserved well for our algorithm for the large deformation between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM, and they maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated. Again, our algorithm achieves lower registration errors within the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformation. For registration of lung slices and individual brain slices, large or small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions: Results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and the reverse transformations between two images. PMID:25559889

  13. Systolic Algorithms for Imaging from Space

    DTIC Science & Technology

    1989-07-31

    on a keystone or trapezoidal grid [Arikan & Munson, 1987]. The image reconstruction algorithm then simply applies an inverse 2-D FFT to the stored ... rithm composed of groups of point targets, and we determined the effects of windowing and incorporation of a Jacobian weighting factor [Arikan ... the impulse response of the desired filter [Arikan & Munson, 1989]. The necessary filtering is then accomplished through the physical mechanism of the

  14. Decision Support Tools for Munitions Response Performance Prediction and Risk Assessment

    DTIC Science & Technology

    2016-09-01

    then given by m̂ = (G^T G)^-1 G^T d_obs = G† d_obs, with G† = (G^T G)^-1 G^T denoting the pseudo-inverse. In this formulation, the model vector at location r ... estimate for observed data d_obs as m̂ = (G^T G)^-1 G^T d_obs = G† d_obs, with G† = (G^T G)^-1 G^T denoting the pseudo-inverse. The dependence of G on the estimated
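
    The least-squares/pseudo-inverse estimate quoted in these excerpts is easy to reproduce numerically (Python/NumPy; the forward operator G, the true model, and the noise level below are made up for the illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      G = rng.standard_normal((20, 3))                 # made-up linear forward operator (20 data, 3 parameters)
      m_true = np.array([1.0, -2.0, 0.5])
      d_obs = G @ m_true + 0.01 * rng.standard_normal(20)

      # m_hat = (G^T G)^-1 G^T d_obs = G† d_obs
      m_hat = np.linalg.solve(G.T @ G, G.T @ d_obs)
      print(np.allclose(m_hat, np.linalg.pinv(G) @ d_obs))   # the two forms agree
      print("estimated model:", np.round(m_hat, 3))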

  15. A spatial operator algebra for manipulator modeling and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, K.; Milman, M.

    1988-01-01

    A powerful new spatial operator algebra for modeling, control, and trajectory design of manipulators is discussed along with its implementation in the Ada programming language. Applications of this algebra to robotics include an operator representation of the manipulator Jacobian matrix; the robot dynamical equations formulated in terms of the spatial algebra, showing their complete equivalence to the recursive Newton-Euler formulation of robot dynamics; the operator factorization and inversion of the manipulator mass matrix which immediately results in O(N) recursive forward dynamics algorithms; the joint accelerations of a manipulator due to a tip contact force; the recursive computation of the equivalent mass matrix as seen at the tip of a manipulator; and recursive forward dynamics of a closed chain system. Finally, additional applications and current research involving the use of the spatial operator algebra are discussed in general terms.

  16. A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software project

    NASA Astrophysics Data System (ADS)

    Polydorides, Nick; Lionheart, William R. B.

    2002-12-01

    The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional inefficient way of calculating the Jacobian, and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist the development, we have developed and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions based on the complete electrode model along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.

  17. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which major principal components (PCs) for surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of Jacobians are validated against the finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the inversion framework. The next step of using this framework to study the aerosol information content in GEO-TASO measurements is also discussed.
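
    The finite-difference check used above to validate the analytical Jacobians is generic and can be sketched with a placeholder forward model (Python/NumPy; the two-parameter model below merely stands in for the vector radiative transfer code, and the step size is illustrative):

      import numpy as np

      def forward_model(p):
          """Placeholder nonlinear forward model (stand-in for the radiative transfer code)."""
          return np.array([np.exp(-p[0]) * p[1], np.sin(p[0] * p[1]), p[0]**2 + 3.0 * p[1]])

      def analytic_jacobian(p):
          """Hand-derived Jacobian of the placeholder model above."""
          c = np.cos(p[0] * p[1])
          return np.array([[-np.exp(-p[0]) * p[1], np.exp(-p[0])],
                           [p[1] * c,              p[0] * c],
                           [2.0 * p[0],            3.0]])

      def fd_jacobian(f, p, eps=1.0e-6):
          """Central finite-difference Jacobian used as the reference."""
          cols = []
          for i in range(p.size):
              dp = np.zeros_like(p)
              dp[i] = eps
              cols.append((f(p + dp) - f(p - dp)) / (2.0 * eps))
          return np.column_stack(cols)

      p = np.array([0.4, 1.2])
      J_fd = fd_jacobian(forward_model, p)
      rel_err = np.abs(analytic_jacobian(p) - J_fd) / np.maximum(np.abs(J_fd), 1.0e-12)
      print("max relative error:", rel_err.max())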

  18. Indirect iterative learning control for a discrete visual servo without a camera-robot model.

    PubMed

    Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan

    2007-08-01

    This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.

  19. Deconvolution using a neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse solutions. This is largely an exercise in understanding how our neural network code works. 1 ref.
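
    Treating 1-D deconvolution as a matrix inversion, as this record does, can be sketched as follows (Python/NumPy; the blur kernel and spike signal are invented, and the noise-free case is shown so that the pseudo-inverse recovers the signal essentially exactly):

      import numpy as np

      kernel = np.array([0.25, 0.5, 0.25])        # invented blur kernel
      signal = np.zeros(40)
      signal[[10, 22, 23]] = [1.0, 0.5, 0.8]      # sparse spike train (illustrative)

      # Build the convolution matrix H so that blurred = H @ signal ("same"-length output)
      n = signal.size
      H = np.zeros((n, n))
      for i in range(n):
          for k, c in enumerate(kernel):
              j = i + k - 1                       # kernel centred on sample i
              if 0 <= j < n:
                  H[i, j] = c

      blurred = H @ signal
      recovered = np.linalg.pinv(H) @ blurred     # deconvolution as pseudo-inverse matrix inversion
      print("max reconstruction error:", np.max(np.abs(recovered - signal)))

    With noisy measurements and an ill-conditioned H, the raw pseudo-inverse amplifies error, which is why regularized or learned inverses are often considered instead.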

  20. Decoupling control of vehicle chassis system based on neural network inverse system

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Zhao, Wanzhong; Luan, Zhongkai; Gao, Qi; Deng, Ke

    2018-06-01

    Steering and suspension are two important subsystems affecting the handling stability and riding comfort of the chassis system. In order to avoid the interference and coupling of the control channels between active front steering (AFS) and active suspension subsystems (ASS), this paper presents a composite decoupling control method, which consists of a neural network inverse system and a robust controller. The neural network inverse system is composed of a static neural network with several integrators and state feedback of the original chassis system to approach the inverse system of the nonlinear systems. The existence of the inverse system for the chassis system is proved by the reversibility derivation of Interactor algorithm. The robust controller is based on the internal model control (IMC), which is designed to improve the robustness and anti-interference of the decoupled system by adding a pre-compensation controller to the pseudo linear system. The results of the simulation and vehicle test show that the proposed decoupling controller has excellent decoupling performance, which can transform the multivariable system into a number of single input and single output systems, and eliminate the mutual influence and interference. Furthermore, it has satisfactory tracking capability and robust performance, which can improve the comprehensive performance of the chassis system.

  1. Discussion and Practical Aspects on Control Allocation for a Multi-Rotor Helicopter

    NASA Astrophysics Data System (ADS)

    Ducard, G. J. J.; Hua, M.-D.

    2011-09-01

    This paper presents practical methods to improve the flight performance of an unmanned multi-rotor helicopter by using an efficient control allocation strategy. The flying vehicle considered is a hexacopter. It is indeed particularly suited for long missions and for carrying a significant payload such as all the sensors needed in the context of cartography, photogrammetry, inspection, surveillance and transportation. Moreover, a stable flight is often required for precise data recording during the mission. Therefore, a high performance flight control system is required to operate the UAV. However, the flight performance of a multi-rotor vehicle is tightly dependent on the control allocation strategy that is used to map the virtual control vector v = [T, L, M, N]^T, composed of the thrust and the torques in roll, pitch and yaw, respectively, to the propellers' speeds. This paper shows that a control allocation strategy based on the classical pseudo-inverse matrix approach only exploits a limited range of the vehicle capabilities to generate thrust and moments. Thus, in this paper, a novel approach is presented, which is based on a weighted pseudo-inverse matrix method capable of exploiting a much larger domain in v. The proposed control allocation algorithm is designed with explicit laws for fast operation and low computational load, suitable for a small microcontroller with limited floating-point operation capability.
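
    The difference between the classical and the weighted pseudo-inverse allocation discussed above can be sketched for a six-rotor layout (Python/NumPy; the allocation matrix B below uses unit arm length, unit thrust/drag coefficients and rotors every 60 degrees, so it is only a simplified stand-in for the paper's hexacopter model, and the weighting is illustrative):

      import numpy as np

      # Simplified hexacopter allocation: rotors every 60 degrees, unit arm length and unit
      # coefficients, alternating spin directions (illustrative geometry only).
      psi = np.deg2rad(np.arange(6) * 60.0)
      spin = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
      B = np.vstack([np.ones(6),      # total thrust T
                     np.sin(psi),     # roll torque L
                     -np.cos(psi),    # pitch torque M
                     spin])           # yaw torque N

      v = np.array([10.0, 0.4, -0.2, 0.1])   # desired virtual control [T, L, M, N]

      u_plain = np.linalg.pinv(B) @ v        # classical pseudo-inverse allocation

      # Weighted pseudo-inverse: u = W^-1 B^T (B W^-1 B^T)^-1 v, here de-emphasising rotor 3
      W_inv = np.diag([1.0, 1.0, 0.2, 1.0, 1.0, 1.0])
      u_weighted = W_inv @ B.T @ np.linalg.solve(B @ W_inv @ B.T, v)

      for name, u in (("plain   ", u_plain), ("weighted", u_weighted)):
          print(name, np.round(u, 3), "reproduces v:", np.allclose(B @ u, v))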

  2. Waterjet and laser etching: the nonlinear inverse problem

    NASA Astrophysics Data System (ADS)

    Bilbao-Guillerna, A.; Axinte, D. A.; Billingham, J.; Cadot, G. B. J.

    2017-07-01

    In waterjet and laser milling, material is removed from a solid surface in a succession of layers to create a new shape, in a depth-controlled manner. The inverse problem consists of defining the control parameters, in particular, the two-dimensional beam path, to arrive at a prescribed freeform surface. Waterjet milling (WJM) and pulsed laser ablation (PLA) are studied in this paper, since a generic nonlinear material removal model is appropriate for both of these processes. The inverse problem is usually solved for this kind of process by simply controlling dwell time in proportion to the required depth of milling at a sequence of pixels on the surface. However, this approach is only valid when shallow surfaces are etched, since it does not take into account either the footprint of the beam or its overlapping on successive passes. A discrete adjoint algorithm is proposed in this paper to improve the solution. Nonlinear effects and non-straight passes are included in the optimization, while the calculation of the Jacobian matrix does not require large computation times. Several tests are performed to validate the proposed method and the results show that tracking error is reduced typically by a factor of two in comparison to the pixel-by-pixel approach and the classical raster path strategy with straight passes. The tracking error can be as low as 2-5% and 1-2% for WJM and PLA, respectively, depending on the complexity of the target surface.

  3. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.

  4. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more higher quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Moneli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti, et al. 2005) was used for a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and relative weighting between the prior and the likelihood function in the Bayesian inversion.

  5. Nonlinear study of the parallel velocity/tearing instability using an implicit, nonlinear resistive MHD solver

    NASA Astrophysics Data System (ADS)

    Chacon, L.; Finn, J. M.; Knoll, D. A.

    2000-10-01

    Recently, a new parallel velocity instability has been found (J. M. Finn, Phys. Plasmas 2, 12, 1995). This mode is a tearing mode driven unstable by curvature effects and sound wave coupling in the presence of parallel velocity shear. Under such conditions, linear theory predicts that tearing instabilities will grow even in situations in which the classical tearing mode is stable. This could then be a viable seed mechanism for the neoclassical tearing mode, and hence a non-linear study is of interest. Here, the linear and non-linear stages of this instability are explored using a fully implicit, fully nonlinear 2D reduced resistive MHD code (L. Chacon et al., "Implicit, Jacobian-free Newton-Krylov 2D reduced resistive MHD nonlinear solver," submitted to J. Comput. Phys., 2000), including viscosity and particle transport effects. The nonlinear implicit time integration is performed using the Newton-Raphson iterative algorithm. Krylov iterative techniques are employed for the required algebraic matrix inversions, implemented Jacobian-free (i.e., without ever forming and storing the Jacobian matrix), and preconditioned with a "physics-based" preconditioner. Nonlinear results indicate that, for large total plasma beta and large parallel velocity shear, the instability results in the generation of large poloidal shear flows and large magnetic islands even in regimes when the classical tearing mode is absolutely stable. For small viscosity, the time asymptotic state can be turbulent.
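
    The Jacobian-free Newton-Krylov idea used by the solver above rests on replacing the Jacobian-vector product with a finite difference of the residual; a toy sketch of that mechanism (Python/SciPy; the two-equation residual is invented and merely stands in for the discretized resistive-MHD system, and no preconditioner is applied) is:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):
          """Toy nonlinear residual standing in for the discretized MHD equations."""
          return np.array([u[0]**3 + u[1] - 3.0, u[0] - u[1]**2 + 1.0])

      def jfnk_solve(u0, n_newton=20, eps=1.0e-7):
          """Newton iteration in which GMRES only ever sees matrix-free J*v products."""
          u = np.asarray(u0, dtype=float)
          for _ in range(n_newton):
              F = residual(u)
              if np.linalg.norm(F) < 1.0e-10:
                  break
              def jv(v, u=u, F=F):
                  # Jacobian-vector product without forming J:  J(u) v ~ [F(u + eps v) - F(u)] / eps
                  return (residual(u + eps * v) - F) / eps
              J = LinearOperator((u.size, u.size), matvec=jv)
              du, info = gmres(J, -F)
              u = u + du
          return u

      print("root of the toy system:", np.round(jfnk_solve([1.0, 1.0]), 6))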

  6. The Copenhagen problem with a quasi-homogeneous potential

    NASA Astrophysics Data System (ADS)

    Fakis, Demetrios; Kalvouridis, Tilemahos

    2017-05-01

    The Copenhagen problem is a well-known case of the famous restricted three-body problem. In this work, instead of considering Newtonian potentials and forces, we assume that the two primaries create a quasi-homogeneous potential, which means that we insert into the inverse-square law of gravitation an inverse-cube corrective term in order to approximate various phenomena, such as the radiation pressure of the primaries or their non-sphericity. Based on this new consideration, we investigate the equilibrium locations of the small body and their parametric dependence, as well as the zero-velocity curves and surfaces for the planar motion, and the evolution of the regions where this motion is permitted when the Jacobian constant varies.

  7. Decoupling control of a five-phase fault-tolerant permanent magnet motor by radial basis function neural network inverse

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Liu, Guohai; Xu, Dezhi; Xu, Liang; Xu, Gaohong; Aamir, Nazir

    2018-05-01

    This paper proposes a new decoupled control for a five-phase in-wheel fault-tolerant permanent magnet (IW-FTPM) motor drive, in which radial basis function neural network inverse (RBF-NNI) and internal model control (IMC) are combined. The RBF-NNI system is introduced into original system to construct a pseudo-linear system, and IMC is used as a robust controller. Hence, the newly proposed control system incorporates the merits of the IMC and RBF-NNI methods. In order to verify the proposed strategy, an IW-FTPM motor drive is designed based on dSPACE real-time control platform. Then, the experimental results are offered to verify that the d-axis current and the rotor speed are successfully decoupled. Besides, the proposed motor drive exhibits strong robustness even under load torque disturbance.

  8. Analysis and design of a six-degree-of-freedom Stewart platform-based robotic wrist

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami; Zhou, Zhen-Lei

    1991-01-01

    The kinematic analysis and implementation of a six degree of freedom robotic wrist which is mounted to a general open kinematic-chain manipulator to serve as a testbed for studying precision robotic assembly in space is discussed. The wrist design is based on the Stewart Platform mechanism and consists mainly of two platforms and six linear actuators driven by DC motors. Position feedback is achieved by linear displacement transducers mounted along the actuators and force feedback is obtained by a 6 degree of freedom force sensor mounted between the gripper and the payload platform. The robot wrist inverse kinematics which computes the required actuator lengths corresponding to Cartesian variables has a closed-form solution. The forward kinematics is solved iteratively using the Newton-Raphson method which simultaneously provides a modified Jacobian Matrix which relates length velocities to Cartesian translational velocities and time rates of change of roll-pitch-yaw angles. Results of computer simulation conducted to evaluate the efficiency of the forward kinematics and Modified Jacobian Matrix are discussed.
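
    The pattern of closed-form inverse kinematics plus Newton-Raphson forward kinematics described above can be sketched on a planar 3-RPR platform, a low-dimensional stand-in for the Stewart Platform (Python/NumPy; the base and platform anchor geometry, the initial guess, and the finite-difference Jacobian are all invented for the example):

      import numpy as np

      # Planar 3-RPR parallel platform: pose = (x, y, phi) of the moving platform,
      # actuators connect fixed base anchors to platform anchors (geometry invented).
      BASE = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.8]])
      PLAT = np.array([[-0.2, -0.1], [0.2, -0.1], [0.0, 0.2]])   # anchors in the platform frame

      def inverse_kinematics(pose):
          """Closed-form actuator lengths for a given platform pose (the easy direction)."""
          x, y, phi = pose
          R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
          anchors = (R @ PLAT.T).T + np.array([x, y])
          return np.linalg.norm(anchors - BASE, axis=1)

      def forward_kinematics(lengths, pose0, n_iter=30, eps=1.0e-6):
          """Newton-Raphson forward kinematics: find the pose whose leg lengths match."""
          pose = np.asarray(pose0, dtype=float)
          for _ in range(n_iter):
              r = inverse_kinematics(pose) - lengths
              if np.linalg.norm(r) < 1.0e-12:
                  break
              # Finite-difference Jacobian of the leg lengths with respect to the pose
              J = np.column_stack([(inverse_kinematics(pose + eps * e) - inverse_kinematics(pose)) / eps
                                   for e in np.eye(3)])
              pose = pose - np.linalg.solve(J, r)
          return pose

      target = np.array([1.0, 0.8, 0.1])
      print(forward_kinematics(inverse_kinematics(target), pose0=[0.9, 0.7, 0.0]))   # recovers ~target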

  9. Iterative Inverse Modeling for Reconciliation of Emission Inventories during the 2006 TexAQS Intensive Field Campaign

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Cohan, D. S.

    2009-12-01

    Substantial uncertainties in current emission inventories have been detected by the Texas Air Quality Study 2006 (TexAQS 2006) intensive field program. These emission uncertainties have caused large inaccuracies in model simulations of air quality and its responses to management strategies. To improve the quantitative understanding of the temporal, spatial, and categorized distributions of primary pollutant emissions by utilizing the corresponding measurements collected during TexAQS 2006, we implemented both the recursive Kalman filter and a batch matrix inversion 4-D data assimilation (FDDA) method in an iterative inverse modeling framework of the CMAQ-DDM model. Equipped with the decoupled direct method, CMAQ-DDM enables simultaneous calculation of the sensitivity coefficients of pollutant concentrations to emissions to be used in the inversions. Primary pollutant concentrations measured by the multiple platforms (TCEQ ground-based, NOAA WP-3D aircraft and Ronald H. Brown vessel, and UH Moody Tower) during TexAQS 2006 have been integrated for the use in the inverse modeling. Firstly pseudo-data analyses have been conducted to assess the two methods, taking a coarse spatial resolution emission inventory as a case. Model base case concentrations of isoprene and ozone at arbitrarily selected ground grid cells were perturbed to generate pseudo measurements with different assumed Gaussian uncertainties expressed by 1-sigma standard deviations. Single-species inversions have been conducted with both methods for isoprene and NOx surface emissions from eight states in the Southeastern United States by using the pseudo measurements of isoprene and ozone, respectively. Utilization of ozone pseudo data to invert for NOx emissions serves only for the purpose of method assessment. Both the Kalman filter and FDDA methods show good performance in tuning arbitrarily shifted a priori emissions to the base case “true” values within 3-4 iterations even for the nonlinear responses of ozone to NOx emissions. While the Kalman filter has better performance under the situation of very large observational uncertainties, the batch matrix FDDA method is better suited for incorporating temporally and spatially irregular data such as those measured by NOAA aircraft and ship. After validating the methods with the pseudo data, the inverse technique is applied to improve emission estimates of NOx from different source sectors and regions in the Houston metropolitan area by using NOx measurements during TexAQS 2006. EPA NEI2005-based and Texas-specified Emission Inventories for 2006 are used as the a priori emission estimates before optimization. The inversion results will be presented and discussed. Future work will conduct inverse modeling for additional species, and then perform a multi-species inversion for emissions consistency and reconciliation with secondary pollutants such as ozone.

  10. Closing the contrast gap between testbed and model prediction with WFIRST-CGI shaped pupil coronagraph

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Nemati, Bijan; Krist, John; Cady, Eric; Prada, Camilo M.; Kern, Brian; Poberezhskiy, Ilya

    2016-07-01

    JPL has recently passed an important milestone in its technology development for a proposed NASA WFIRST mission coronagraph: demonstration of better than 1x10^-8 contrast over broad bandwidth (10%) on both shaped pupil coronagraph (SPC) and hybrid Lyot coronagraph (HLC) testbeds with the WFIRST obscuration pattern. Challenges remain, however, in the technology readiness for the proposed mission. One is the discrepancy between the achieved contrasts on the testbeds and their corresponding model predictions. A series of testbed diagnoses and modeling activities were planned and carried out on the SPC testbed in order to close the gap. A very useful tool we developed was a derived "measured" testbed wavefront control Jacobian matrix that could be compared with the model-predicted "control" version that was used to generate the high contrast dark hole region in the image plane. The difference between these two is an estimate of the error in the control Jacobian. When the control matrix, which includes both amplitude and phase, was modified to reproduce the error, the simulated performance closely matched the SPC testbed behavior in both contrast floor and contrast convergence speed. This is a step closer toward model validation for high contrast coronagraphs. Further Jacobian analysis and modeling provided clues to the possible sources for the mismatch: DM misregistration and testbed optical wavefront error (WFE) and the deformable mirror (DM) setting for correcting this WFE. These analyses suggested that a high contrast coronagraph has a tight tolerance in the accuracy of its control Jacobian. Modifications to both the testbed control model and the prediction model are being implemented, and future work is discussed.

  11. A complete analytical solution for the inverse instantaneous kinematics of a spherical-revolute-spherical (7R) redundant manipulator

    NASA Technical Reports Server (NTRS)

    Podhorodeski, R. P.; Fenton, R. G.; Goldenberg, A. A.

    1989-01-01

    Using a method based upon resolving joint velocities using reciprocal screw quantities, compact analytical expressions are generated for the inverse solution of the joint rates of a seven revolute (spherical-revolute-spherical) manipulator. The method uses a sequential decomposition of screw coordinates to identify reciprocal screw quantities used in the resolution of a particular joint rate solution, and also to identify a Jacobian null-space basis used for the direct solution of optimal joint rates. The results of the screw decomposition are used to study special configurations of the manipulator, generating expressions for the inverse velocity solution for all non-singular configurations of the manipulator, and identifying singular configurations and their characteristics. Two functions are therefore served: a new general method for the solution of the inverse velocity problem is presented; and complete analytical expressions are derived for the resolution of the joint rates of a seven degree of freedom manipulator useful for telerobotic and industrial robotic application.

  12. Joint inversion of acoustic and resistivity data for the estimation of gas hydrate concentration

    USGS Publications Warehouse

    Lee, Myung W.

    2002-01-01

    Downhole log measurements, such as acoustic or electrical resistivity logs, are frequently used to estimate in situ gas hydrate concentrations in the pore space of sedimentary rocks. Usually the gas hydrate concentration is estimated separately based on each log measurement. However, measurements are related to each other through the gas hydrate concentration, so the gas hydrate concentrations can be estimated by jointly inverting available logs. Because the magnitude of slowness of acoustic and resistivity values differs by more than an order of magnitude, a least-squares method, weighted by the inverse of the observed values, is attempted. Estimating the resistivity of connate water and gas hydrate concentration simultaneously is problematic, because the resistivity of connate water is independent of acoustics. In order to overcome this problem, a coupling constant is introduced in the Jacobian matrix. In the use of different logs to estimate gas hydrate concentration, a joint inversion of different measurements is preferred to the averaging of each inversion result.

  13. Maneuvering strategies using CMGs

    NASA Technical Reports Server (NTRS)

    Oh, H. S.; Vadali, S. R.

    1988-01-01

    This paper considers control strategies for maneuvering spacecraft using Single-Gimbal Control Momentum Gyros (CMGs). A pyramid configuration using four gyros is utilized. Preferred initial gimbal angles for maximum utilization of CMG momentum are obtained for some known torque commands. Feedback control laws are derived from the stability point of view by using Liapunov's Second Theorem. The gyro rates are obtained by the pseudo-inverse technique. The effect of gimbal rate bounds on controllability is studied for an example maneuver. Singularity avoidance is based on limiting the gyro rates depending on a singularity index.
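
    The pseudo-inverse steering step for a four-CMG pyramid can be sketched as follows (Python/NumPy; the skew angle, gimbal angles, and commanded momentum rate are illustrative, and the Jacobian is the standard single-gimbal pyramid form with unit wheel momentum rather than anything taken from the paper):

      import numpy as np

      BETA = np.deg2rad(54.73)   # pyramid skew angle (a common choice for a near-spherical envelope)

      def cmg_jacobian(delta, beta=BETA):
          """Jacobian A(delta) of a 4-CMG pyramid with unit wheel momentum: hdot = A @ delta_dot."""
          cb, sb = np.cos(beta), np.sin(beta)
          c, s = np.cos(delta), np.sin(delta)
          return np.array([[-cb * c[0],  s[1],        cb * c[2], -s[3]],
                           [-s[0],      -cb * c[1],   s[2],       cb * c[3]],
                           [ sb * c[0],  sb * c[1],   sb * c[2],  sb * c[3]]])

      delta = np.deg2rad([10.0, -20.0, 15.0, 5.0])   # current gimbal angles (illustrative)
      hdot_cmd = np.array([0.02, -0.01, 0.03])       # commanded momentum rate in the body frame

      A = cmg_jacobian(delta)
      delta_dot = np.linalg.pinv(A) @ hdot_cmd       # minimum-norm gimbal rates via the pseudo-inverse

      print("gimbal rates:", np.round(delta_dot, 4))
      print("singularity measure det(A A^T):", np.linalg.det(A @ A.T))   # tends to 0 near singular states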

  14. 1D-VAR Retrieval Using Superchannels

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel; Larar, Allen; Smith, William L.; Schluessel, Peter; Mango, Stephen; SaintGermain, Karen

    2008-01-01

    Since modern ultra-spectral remote sensors have thousands of channels, it is difficult to include all of them in a 1D-var retrieval system. We will describe a physical inversion algorithm, which includes all available channels for the atmospheric temperature, moisture, cloud, and surface parameter retrievals. Both the forward model and the inversion algorithm compress the channel radiances into super channels. These super channels are obtained by projecting the radiance spectra onto a set of pre-calculated eigenvectors. The forward model provides both the super channel properties and the Jacobian in EOF space directly. For ultra-spectral sensors such as the Infrared Atmospheric Sounding Interferometer (IASI) and the NPOESS Airborne Sounder Testbed Interferometer (NAST), a compression ratio of more than 80 can be achieved, leading to a significant reduction in the computations involved in an inversion process. Results of applying the algorithm to real IASI and NAST data will be shown.
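
    A minimal sketch of the super-channel idea, assuming the EOFs are simply the leading left singular vectors of a set of training spectra: both the radiances and the channel-space Jacobian are projected onto the same eigenvector basis. The channel count, training set, and Jacobian here are synthetic placeholders, not IASI or NAST data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_chan, n_train, n_eof = 2000, 300, 50     # toy sizes (real sensors have thousands of channels)

# Synthetic training radiance spectra standing in for radiative-transfer output.
R_train = rng.standard_normal((n_chan, n_train))

# Leading eigenvectors (EOFs) of the training spectra define the super channels.
U, _, _ = np.linalg.svd(R_train, full_matrices=False)
E = U[:, :n_eof]                            # n_chan x n_eof projection matrix

# Compress a measured spectrum and a channel-space Jacobian into EOF space.
y = rng.standard_normal(n_chan)             # measured radiances (toy)
K = rng.standard_normal((n_chan, 40))       # Jacobian w.r.t. 40 state variables (toy)
y_super = E.T @ y                           # super-channel radiances
K_super = E.T @ K                           # Jacobian in EOF space
print(y_super.shape, K_super.shape, "compression ratio ~", n_chan // n_eof)
```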

  15. Accelerated gradient based diffuse optical tomographic image reconstruction.

    PubMed

    Biswas, Samir Kumar; Rajan, K; Vasu, R M

    2011-01-01

    We present fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The Newton-based MOBIIR algorithm, which is generally used, requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time manyfold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the results of reconstruction. We have shown through experimental and simulation studies that the Broyden-based MOBIIR and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. The Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple and result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution; however, when the initial guess is far from the true solution, the Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
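
    The essential ingredient of the Broyden-based schemes, replacing repeated Jacobian evaluations with low-rank secant updates, can be written in a few lines. The sketch below shows the classic rank-one ("good") Broyden update on a toy two-variable forward model; the actual BMOBIIR/ABMOBIIR algorithms apply the same idea to the diffusion-equation Jacobian and add adjoint information.

```python
import numpy as np

def broyden_update(J, dx, df):
    """Rank-one Broyden update: the returned matrix satisfies the secant condition J_new @ dx = df."""
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)

# Toy nonlinear forward model and its analytic Jacobian at a point.
def F(x):
    return np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**3])

x0 = np.array([0.5, 0.2])
J = np.array([[2 * x0[0], 1.0],
              [np.cos(x0[0]), 3 * x0[1]**2]])   # analytic Jacobian at x0

# One secant pair, as would come from successive iterates of the inversion.
x1 = x0 + np.array([1e-2, -5e-3])
J = broyden_update(J, x1 - x0, F(x1) - F(x0))
print(J)   # approximate Jacobian near x1, obtained without re-deriving sensitivities
```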

  16. 3-D CSEM data inversion algorithm based on simultaneously active multiple transmitters concept

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin Kumar; Israil, Mohammad

    2017-05-01

    We present an algorithm for efficient 3-D inversion of marine controlled-source electromagnetic data. The efficiency is achieved by exploiting the redundancy in data. The data redundancy is reduced by compressing the data through stacking of the response of transmitters which are in close proximity. This stacking is equivalent to synthesizing the data as if the multiple transmitters are simultaneously active. The redundancy in data, arising due to close transmitter spacing, has been studied through singular value analysis of the Jacobian formed in 1-D inversion. This study reveals that the transmitter spacing of 100 m, typically used in marine data acquisition, does result in redundancy in the data. In the proposed algorithm, the data are compressed through stacking which leads to both computational advantage and reduction in noise. The performance of the algorithm for noisy data is demonstrated through the studies on two types of noise, viz., uncorrelated additive noise and correlated non-additive noise. It is observed that in case of uncorrelated additive noise, up to a moderately high (10 percent) noise level the algorithm addresses the noise as effectively as the traditional full data inversion. However, when the noise level in the data is high (20 percent), the algorithm outperforms the traditional full data inversion in terms of data misfit. Similar results are obtained in case of correlated non-additive noise and the algorithm performs better if the level of noise is high. The inversion results of a real field data set are also presented to demonstrate the robustness of the algorithm. The significant computational advantage in all cases presented makes this algorithm a better choice.
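
    The stacking step, grouping nearby transmitters and averaging their gathers as if they were simultaneously active, amounts to a simple reshape-and-mean over transmitter groups. The sketch below uses random numbers and an assumed group size of four; the real algorithm chooses the grouping from the transmitter geometry.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rx = 40, 25
tx_x = np.arange(n_tx) * 100.0                  # transmitters every 100 m along a line
data = rng.standard_normal((n_tx, n_rx)) + 5.0  # toy responses (one row per transmitter)

group_size = 4                                  # stack 4 adjacent transmitters
n_grp = n_tx // group_size
stacked = data[:n_grp * group_size].reshape(n_grp, group_size, n_rx).mean(axis=1)
stacked_x = tx_x[:n_grp * group_size].reshape(n_grp, group_size).mean(axis=1)

# The stacked gathers behave like data from fewer, simultaneously active transmitters,
# reducing the inversion's data volume and averaging down uncorrelated noise.
print(data.shape, "->", stacked.shape)
```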

  17. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    The satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
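
    The two-phase structure of FP-ILM (train an inversion operator on synthetic forward-model output, then apply it directly to measurements) can be illustrated with a deliberately simple surrogate. The sketch below uses a toy linear-plus-weak-nonlinear forward model and a ridge-regression inverse operator; the real FP-ILM uses a full radiative transfer model, smart sampling, and machine-learning regressors, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_rad, n_train = 3, 50, 2000

# Toy "full-physics" forward model: state -> radiances (stand-in for radiative transfer).
A = rng.standard_normal((n_rad, n_state))
def forward(x):
    return A @ x + 0.1 * np.tanh(x[0]) * np.ones(n_rad)

# Training phase: sample states, simulate radiances, fit an inverse operator (ridge regression).
X = rng.uniform(-1, 1, size=(n_train, n_state))
Y = np.array([forward(x) for x in X]) + 0.01 * rng.standard_normal((n_train, n_rad))
lam = 1e-3
W = np.linalg.solve(Y.T @ Y + lam * np.eye(n_rad), Y.T @ X)   # maps radiances -> state

# Operational phase: apply the learned operator to a new "measurement" (no iterations needed).
x_true = np.array([0.3, -0.5, 0.8])
y_meas = forward(x_true) + 0.01 * rng.standard_normal(n_rad)
print(y_meas @ W, "vs true", x_true)
```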

  18. New preconditioning strategy for Jacobian-free solvers for variably saturated flows with Richards’ equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil

    2016-04-29

    We develop a new approach for solving the nonlinear Richards' equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing the PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. Efficiency and robustness of the existing solvers rely on many factors, including an empirical quality control of intermediate iterates, the complexity of the employed discretization method, and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves the convergence of existing Jacobian-free solvers by a factor of 3-20. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than several widely used Jacobian-free solvers.
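
    A minimal sketch of the kind of solver being preconditioned: a Jacobian-free Newton-Krylov iteration in which the Jacobian-vector product is a finite difference of the residual and GMRES is preconditioned with a factorization of a stable linearization (here simply the linear diffusion part). The 1D residual below is a generic stand-in, not the Richards equation or the preconditioner of the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
h = 1.0 / (n + 1)
A = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2   # 1D Laplacian (stiff linear part)
b = np.ones(n)

def F(u):                                   # nonlinear residual (toy stand-in)
    return A @ u + u**3 - b

u = np.zeros(n)
lu = spla.splu(sp.csc_matrix(A))            # preconditioner from a stable linearization
M = spla.LinearOperator((n, n), matvec=lu.solve)

for it in range(10):
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    eps = 1e-7
    # Jacobian-free Jacobian-vector product: J v ~ (F(u + eps v) - F(u)) / eps
    Jv = spla.LinearOperator((n, n), matvec=lambda v: (F(u + eps * v) - F(u)) / eps)
    du, info = spla.gmres(Jv, -r, M=M, atol=1e-12)
    u = u + du
print(it, np.linalg.norm(F(u)))
```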

  19. Patterns of Stochastic Behavior in Dynamically Unstable High-Dimensional Biochemical Networks

    PubMed Central

    Rosenfeld, Simon

    2009-01-01

    The question of dynamical stability and stochastic behavior of large biochemical networks is discussed. It is argued that stringent conditions of asymptotic stability have very little chance to materialize in a multidimensional system described by the differential equations of chemical kinetics. The reason is that the criteria of asymptotic stability (the Routh-Hurwitz and Lyapunov criteria, Feinberg's Deficiency Zero theorem) would impose limitations of very high algebraic order on the kinetic rates and stoichiometric coefficients, and there are no natural laws that would guarantee their unconditional validity. Highly nonlinear, dynamically unstable systems, however, are not necessarily doomed to collapse, as a simple Jacobian analysis would suggest. It is possible that their dynamics may assume the form of pseudo-random fluctuations quite similar to shot noise, and, therefore, their behavior may be described in terms of Langevin and Fokker-Planck equations. We have shown by simulation that the resulting pseudo-stochastic processes obey the heavy-tailed Generalized Pareto Distribution, with the temporal sequence of pulses forming a set of constituent-specific Poisson processes. Applied to intracellular dynamics, these properties are naturally associated with burstiness, a well-documented phenomenon in the biology of gene expression. PMID:19838330

  20. Steering Law Design for Redundant Single Gimbal Control Moment Gyro Systems. M.S. Thesis - Massachusetts Inst. of Technology.

    NASA Technical Reports Server (NTRS)

    Bedrossian, Nazareth Sarkis

    1987-01-01

    The correspondence between robotic manipulators and single gimbal Control Moment Gyro (CMG) systems was exploited to aid in the understanding and design of single gimbal CMG steering laws. A test for null motion near a singular CMG configuration was derived which is able to distinguish between escapable and inescapable singular states. Detailed analysis of the Jacobian matrix null-space was performed, and the results were used to develop and test a variety of single gimbal CMG steering laws. Computer simulations showed that all existing singularity avoidance methods are unable to avoid elliptic internal singularities. A new null motion algorithm using the Moore-Penrose pseudoinverse, however, was shown by simulation to avoid elliptic-type singularities under certain conditions. The SR-inverse, with appropriate null motion, was proposed as a general approach to singularity avoidance because of its ability to avoid singularities through limited introduction of torque error. Simulation results confirmed the superior performance of this method compared to the other available and proposed pseudoinverse-based steering laws.
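
    For concreteness, a minimal sketch of an SR-inverse (damped pseudo-inverse) steering law with an optional null-motion term is given below. The 3x4 torque Jacobian, damping value, and preferred null-motion direction are illustrative assumptions, not the thesis' specific formulation.

```python
import numpy as np

def sr_inverse_rates(A, torque_cmd, alpha=0.01, null_gain=0.0, d=None):
    """Singularity-robust (damped) pseudo-inverse steering with optional null motion.

    A: 3 x n CMG torque Jacobian; torque_cmd: desired torque;
    alpha: damping that trades torque error for bounded rates near singularities;
    d: preferred gimbal-rate direction, projected into the null space.
    """
    n = A.shape[1]
    A_sr = A.T @ np.linalg.inv(A @ A.T + alpha * np.eye(3))
    rates = A_sr @ torque_cmd
    if d is not None:
        rates += null_gain * (np.eye(n) - A_sr @ A) @ d
    return rates

# Toy 4-CMG Jacobian near a singular configuration (rows nearly dependent).
A = np.array([[1.0, 0.9, 1.1, 1.0],
              [0.0, 0.1, -0.1, 0.0],
              [0.0, 0.0, 0.05, 0.0]])
tau = np.array([0.2, 0.1, 0.05])
print(sr_inverse_rates(A, tau, alpha=0.05, null_gain=0.5, d=np.ones(4)))
```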

  1. Decision Support Tools for Munitions Response Performance Prediction and Risk Assessment

    DTIC Science & Technology

    2013-01-01

    with G, the forward modeling matrix, implicitly dependent on target location. The least-squares model estimate is then given by $\hat{m} = (G^T G)^{-1} G^T d_{\mathrm{obs}} = G^\dagger d_{\mathrm{obs}}$ (6), with $G^\dagger = (G^T G)^{-1} G^T$ (7) denoting the pseudo-inverse. When inverting observed field data for a sensor with tri-axial transmit and receive coils ... the covariance of $\hat{L}$ can be expressed as $\mathrm{cov}(\hat{L}) = \beta\, G^\dagger(r)\, \mathrm{cov}(d)\, (G^\dagger(r))^T \beta^T = \beta\, G^\dagger(r)\, G_{\mathrm{eq}}\, \alpha\, \mathrm{cov}(L)\, \alpha^T G_{\mathrm{eq}}^T\, (G^\dagger(r))^T \beta^T$ (53), where the pseudo-inverse is $G^\dagger = (G^T G)^{-1} G^T$.

  2. Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.

    PubMed

    Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan

    2018-10-31

    In order to suppress or eliminate the spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo-random binary sequence (PRBS) were produced in sequential order by an ion gate controller and utilized to control the ion gate of the IMS, and the normal HT-IMS mobility spectrum and the inverse HT-IMS mobility spectrum were obtained. A NIBOHT-IMS mobility spectrum was gained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR when measuring the reactant ions. Furthermore, the gases CHCl3 and CH2Br2 were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably for the detection of both large and small ion signals. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  4. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

    Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end-inhalation and end-exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper investigates the sensitivity of the relative Jacobian error to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random, smooth, biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors of 0.53 mm on average may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system are positively correlated across subjects (close to +1), i.e. regions with high sensitivity have, on average, more error in the Jacobian determinant.
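
    Since the quantity being perturbed is the Jacobian determinant of the registration transform, a small voxel-wise implementation is useful for reference. The sketch below computes det(I + ∂u/∂x) for a displacement field with central finite differences; the field, grid size, and spacing are toy values, and no claim is made that this matches the authors' implementation.

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Jacobian determinant of the transform x + u(x) for a displacement field
    disp of shape (3, nz, ny, nx), using central finite differences."""
    grads = np.empty((3, 3) + disp.shape[1:])
    for i in range(3):          # displacement component u_i
        for j in range(3):      # derivative direction x_j
            grads[i, j] = np.gradient(disp[i], spacing[j], axis=j)
    J = grads + np.eye(3)[:, :, None, None, None]   # dphi_i/dx_j = delta_ij + du_i/dx_j
    # determinant of each voxel's 3x3 Jacobian
    return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))

rng = np.random.default_rng(5)
u = 0.5 * rng.standard_normal((3, 20, 20, 20))      # toy displacement field (voxels)
detJ = jacobian_determinant(u)
print(detJ.shape, detJ.mean())                      # ~1 means locally volume-preserving
```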

  5. Aeroelastic Wing Shaping Control Subject to Actuation Constraints.

    NASA Technical Reports Server (NTRS)

    Swei, Sean Shan-Min; Nguyen, Nhan

    2014-01-01

    This paper considers the control of a coupled aeroelastic aircraft model configured with the Variable Camber Continuous Trailing Edge Flap (VCCTEF) system. The relative deflection between two adjacent flaps is constrained, and this actuation constraint is accounted for when designing an effective control law for suppressing the wing vibration. A simple tuned-mass damper mechanism with two attached masses is used as an example to demonstrate the effectiveness of vibration suppression with confined motion of the tuned masses. In this paper, a dynamic-inversion-based pseudo-control hedging (PCH) and bounded control approach is investigated and, for illustration, applied to the NASA Generic Transport Model (GTM) configured with the VCCTEF system.

  6. Estimation of pseudo-2D shear-velocity section by inversion of high frequency surface waves

    USGS Publications Warehouse

    Luo, Y.; Liu, J.; Xia, J.; Xu, Y.; Liu, Q.

    2006-01-01

    A scheme to generate pseudo-2D shear-velocity sections with high horizontal resolution and low field cost by inversion of high frequency surface waves is presented. It contains six steps. The key step is the joint method of cross correlation and phase-shift scanning. This joint method chooses only two traces to generate an image of the dispersion curve. Because Rayleigh-wave dispersion is most important for the estimation of near-surface shear-wave velocity, the method can effectively obtain reliable images of dispersion curves from a couple of traces. The result of a synthetic example shows the feasibility of this scheme. © 2005 Society of Exploration Geophysicists.

  7. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. The finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques; however, the FE technique has meshing problems and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both the FE and EFG methods, the complete electrode model of the forward problem is solved, and an iteratively regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is also used to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated with analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Results of image reconstruction are presented for a human chest experimental phantom.
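
    The inverse step described above is a regularized Gauss-Newton update built from the Jacobian of the forward model. A generic sketch with Tikhonov regularization is shown below; the Jacobian, regularization matrix, and measurement sizes are random placeholders rather than output of the FE-EFG forward solver.

```python
import numpy as np

def gauss_newton_step(J, v_meas, v_sim, sigma, lam, L):
    """One iteratively regularized Gauss-Newton update for the conductivity sigma:
    minimize ||v_sim + J d - v_meas||^2 + lam * ||L (sigma + d)||^2 over the update d."""
    r = v_meas - v_sim
    H = J.T @ J + lam * (L.T @ L)
    g = J.T @ r - lam * (L.T @ L) @ sigma
    return sigma + np.linalg.solve(H, g)

rng = np.random.default_rng(6)
n_meas, n_elem = 208, 400                     # e.g. 16-electrode protocol, coarse mesh (toy)
J = rng.standard_normal((n_meas, n_elem))     # Jacobian from a forward model (toy values)
L = np.eye(n_elem)                            # Tikhonov regularization matrix
sigma = np.ones(n_elem)
v_sim = rng.standard_normal(n_meas)
v_meas = v_sim + 0.01 * rng.standard_normal(n_meas)

sigma_new = gauss_newton_step(J, v_meas, v_sim, sigma, lam=1e-2, L=L)
print(np.linalg.norm(sigma_new - sigma))
```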

  8. Methods of computing steady-state voltage stability margins of power systems

    DOEpatents

    Chow, Joe Hong; Ghiocel, Scott Gordon

    2018-03-20

    In steady-state voltage stability analysis, as load increases toward a maximum, conventional Newton-Raphson power flow Jacobian matrix becomes increasingly ill-conditioned so power flow fails to converge before reaching maximum loading. A method to directly eliminate this singularity reformulates the power flow problem by introducing an AQ bus with specified bus angle and reactive power consumption of a load bus. For steady-state voltage stability analysis, the angle separation between the swing bus and AQ bus can be varied to control power transfer to the load, rather than specifying the load power itself. For an AQ bus, the power flow formulation is only made up of a reactive power equation, thus reducing the size of the Jacobian matrix by one. This reduced Jacobian matrix is nonsingular at the critical voltage point, eliminating a major difficulty in voltage stability analysis for power system operations.

  9. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    PubMed

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.

  10. Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography

    PubMed Central

    Borsic, A.; Attardo, E. A.; Halter, R. J.

    2012-01-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on 4 GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 minutes to 14 seconds. We regard this as an important step towards gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for Electrical Impedance Tomography, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the Adjoint Method. PMID:23010857

  11. Development, Verification and Experimental Analysis of High-Fidelity Mathematical Models for Control Moment Gyros

    DTIC Science & Technology

    2011-12-01

    therefore a more general approach uses the pseudo-inverse shown in Equation (12) to obtain the commanded gimbal rate. ... gimbal motor. Approaching the problem from this perspective increases the complexity significantly and the relationship between motor current and ... included in this document confirms the equations that Schaub and Junkins developed. The approaches used in the two derivations are sufficiently ...

  12. Ni-NiO core-shell inverse opal electrodes for supercapacitors.

    PubMed

    Kim, Jae-Hun; Kang, Soon Hyung; Zhu, Kai; Kim, Jin Young; Neale, Nathan R; Frank, Arthur J

    2011-05-14

    A general template-assisted electrochemical approach was used to synthesize three-dimensional ordered Ni core-NiO shell inverse opals (IOs) as electrodes for supercapacitors. The Ni-NiO IO electrodes displayed pseudo-capacitor behavior, good rate capability and cycling performance. © The Royal Society of Chemistry 2011

  13. Off-diagonal Jacobian support for Nodal BCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, John W.; Andrs, David; Gaston, Derek R.

    In this brief note, we describe the implementation of off-diagonal Jacobian computations for nodal boundary conditions in the Multiphysics Object Oriented Simulation Environment (MOOSE) [1] framework. There are presently a number of applications [2-5] based on the MOOSE framework that solve complicated physical systems of partial differential equations whose boundary conditions are often highly nonlinear. Accurately computing the on- and off-diagonal Jacobian and preconditioner entries associated to these constraints is crucial for enabling efficient numerical solvers in these applications. Two key ingredients are required for properly specifying the Jacobian contributions of nonlinear nodal boundary conditions in MOOSE and finite element codes in general: 1. The ability to zero out entire Jacobian matrix rows after ...

  14. Implementing a Matrix-free Analytical Jacobian to Handle Nonlinearities in Models of 3D Lithospheric Deformation

    NASA Astrophysics Data System (ADS)

    Kaus, B.; Popov, A.

    2015-12-01

    The analytical expression for the Jacobian is a key component for achieving fast and robust convergence of the nonlinear Newton-Raphson iterative solver. Accomplishing this task in practice often requires a significant algebraic effort, so it is quite common to use a cheap alternative instead, for example approximating the Jacobian with a finite difference estimate. Despite its simplicity, this is a relatively fragile and unreliable technique that is sensitive to the scaling of the residual and unknowns, as well as to the choice of perturbation parameter, and no universal rule can be applied to provide both a robust scaling and a robust perturbation. The approach we use here is to derive the analytical Jacobian for the coupled set of momentum, mass, and energy conservation equations together with the elasto-visco-plastic rheology and a marker-in-cell/staggered finite difference method. The software project LaMEM (Lithosphere and Mantle Evolution Model) is primarily developed for thermo-mechanically coupled modeling of 3D lithospheric deformation. The code is based on a staggered grid finite difference discretization in space and uses customized scalable solvers from the PETSc library to run efficiently on massively parallel machines (such as IBM Blue Gene/Q). Currently LaMEM relies on the Jacobian-Free Newton-Krylov (JFNK) nonlinear solver, which approximates the Jacobian-vector product using a simple finite difference formula. This approach never requires an assembled Jacobian matrix and uses only the residual computation routine. We use an approximate Jacobian (Picard) matrix to precondition the Krylov solver with a Galerkin geometric multigrid. Because of the inherent problems of finite difference Jacobian estimation, this approach does not always result in stable convergence. In this work we present and discuss a matrix-free technique in which the Jacobian-vector product is replaced by analytically derived expressions, and we compare results with those obtained with a finite difference approximation of the Jacobian. This project is funded by ERC Starting Grant 258830, and computer facilities were provided by the Jülich supercomputer center (Germany).

  15. Evaluation of the observation operator Jacobian for leaf area index data assimilation with an extended Kalman filter

    NASA Astrophysics Data System (ADS)

    Rüdiger, Christoph; Albergel, Clément; Mahfouf, Jean-François; Calvet, Jean-Christophe; Walker, Jeffrey P.

    2010-05-01

    To quantify carbon and water fluxes between the vegetation and the atmosphere in a consistent manner, land surface models now include interactive vegetation components. These models treat the vegetation biomass as a prognostic model state, allowing the model to dynamically adapt the vegetation states to environmental conditions. However, it is expected that the prediction skill of such models can be greatly increased by assimilating biophysical observations such as leaf area index (LAI). The Jacobian of the observation operator, a central aspect of data assimilation methods such as the extended Kalman filter (EKF) and the variational assimilation methods, provides the required linear relationship between the observation and the model states. In this paper, the Jacobian required for assimilating LAI into the Interaction between the Soil, Biosphere and Atmosphere-A-gs land surface model using the EKF is studied. In particular, sensitivity experiments were undertaken on the size of the initial perturbation for estimating the Jacobian and on the length of the time window between the initial state and the available observation. It was found that small perturbations (0.1% of the state) typically lead to accurate estimates of the Jacobian. While other studies have shown that the assimilation of LAI with 10-day assimilation windows is possible, 1-day assimilation intervals can be chosen to comply with numerical weather prediction requirements. Moreover, the seasonal dependence of the Jacobian revealed contrasting groups of Jacobian values due to environmental factors. Further analyses showed the Jacobian values to vary as a function of the LAI itself, which has important implications for its assimilation in different seasons, as the size of the LAI increments will subsequently vary due to the variability of the Jacobian.
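
    The perturbation experiments above amount to finite-difference estimates of the observation-operator Jacobian with different perturbation sizes. The sketch below shows that calculation for a made-up LAI observation operator; the operator h and the state variables are invented for illustration and are not the ISBA-A-gs model.

```python
import numpy as np

def observation_jacobian(h, x, rel_pert=1e-3):
    """Finite-difference Jacobian of observation operator h at state x.
    rel_pert is the perturbation expressed as a fraction of each state component."""
    y0 = np.atleast_1d(h(x))
    H = np.empty((y0.size, x.size))
    for j in range(x.size):
        dx = rel_pert * x[j] if x[j] != 0 else rel_pert
        xp = x.copy()
        xp[j] += dx
        H[:, j] = (np.atleast_1d(h(xp)) - y0) / dx
    return H

# Toy observation operator: LAI "observed" as a saturating function of biomass and moisture.
def h(x):
    biomass, w = x
    return np.array([3.0 * (1.0 - np.exp(-0.8 * biomass)) * w])

x = np.array([2.0, 0.6])
for p in (1e-1, 1e-2, 1e-3):          # sensitivity to the perturbation size
    print(p, observation_jacobian(h, x, rel_pert=p))
```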

  16. Pseudo-invariants contributing to inverse energy cascades in three-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Rathmann, Nicholas M.; Ditlevsen, Peter D.

    2017-05-01

    Three-dimensional (3D) turbulence is characterized by a dual forward cascade of both kinetic energy and helicity, a second inviscid flow invariant besides energy, from the integral scale of motion to the viscous dissipative scale. In helical flows, however, such as strongly rotating flows with broken mirror symmetry, an inverse (reversed) energy cascade can be observed analogous to that of two-dimensional turbulence (2D) where enstrophy, a second positive-definite flow invariant, unlike helicity in 3D, effectively blocks the forward cascade of energy. In the spectral-helical decomposition of the Navier-Stokes equation, it has previously been shown that a subset of three-wave (triad) interactions conserve helicity in 3D in a fashion similar to enstrophy in 2D, thus leading to a 2D-like inverse energy cascade in 3D. In this work, we show, both theoretically and numerically, that an additional subset of interactions exist, conserving a new pseudo-invariant in addition to energy and helicity, which contributes either to a forward or an inverse energy cascade depending on the specific triad interaction geometry.

  17. Users manual for the Variable dimension Automatic Synthesis Program (VASP)

    NASA Technical Reports Server (NTRS)

    White, J. S.; Lee, H. Q.

    1971-01-01

    A dictionary and some example problems for the Variable dimension Automatic Synthesis Program (VASP) are presented. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory. These example problems include dynamic response, optimal control gain, solution of the sampled-data matrix Riccati equation, matrix decomposition, and the pseudo-inverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in the conversational mode on the Ames 360/67 computer.

  18. Processing grounded-wire TEM signal in time-frequency-pseudo-seismic domain: A new paradigm

    NASA Astrophysics Data System (ADS)

    Khan, M. Y.; Xue, G. Q.; Chen, W.; Huasen, Z.

    2017-12-01

    Grounded-wire TEM has received great attention in mineral, hydrocarbon, and hydrogeological investigations over the last several years. Conventionally, TEM soundings have been presented as apparent resistivity curves as a function of time. With the development of sophisticated computational algorithms, it became possible to extract more realistic geoelectric information by applying inversion programs to 1-D and 3-D problems. Here, we analyze grounded-wire TEM data by carrying out analysis in the time, frequency, and pseudo-seismic domains, supported by borehole information. First, H, K, A, and Q type geoelectric models are processed using a proven inversion program (1-D Occam inversion). Second, a time-to-frequency transformation is conducted from TEM ρa(t) curves to magnetotelluric (MT) ρa(f) curves for the same models based on all-time apparent resistivity curves. Third, the 1-D Bostick algorithm is applied to the transformed resistivity. Finally, the EM diffusion field is transformed into a propagating wave field obeying the standard wave equation using a wavelet transformation technique, and a pseudo-seismic section is constructed. The transformed seismic-like wave indicates that reflection and refraction phenomena appear when the EM wave field interacts with geoelectric interfaces at different depth intervals due to contrasts in resistivity. The resolution of the transformed TEM data is significantly improved in comparison to apparent resistivity plots. A case study illustrates the successful hydrogeophysical application of the proposed approach in recovering a water-filled mined-out area in a coal field located in Ye county, Henan province, China. The results support the introduction of pseudo-seismic imaging technology in the short-offset version of TEM, which can also be a useful aid if integrated with the seismic reflection technique to explore possibilities for high-resolution EM imaging in the future.

  19. Singularity and Nonnormality in the Classification of Compositional Data

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Davis, J.C.; Olea, R.A.; Harff, Jan

    1998-01-01

    Geologists may want to classify compositional data and express the classification as a map. Regionalized classification is a tool that can be used for this purpose, but it incorporates discriminant analysis, which requires the computation and inversion of a covariance matrix. Covariance matrices of compositional data will always be singular (noninvertible) because of the unit-sum constraint. Fortunately, discriminant analyses can be calculated using a pseudo-inverse of the singular covariance matrix; this is done automatically by some statistical packages such as SAS. Granulometric data from the Darss Sill region of the Baltic Sea are used to explore how the pseudo-inversion procedure influences discriminant analysis results, comparing the algorithm used by SAS to the more conventional Moore-Penrose algorithm. Logratio transforms have been recommended to overcome problems associated with the analysis of compositional data, including singularity. A regionalized classification of the Darss Sill data after logratio transformation differs only slightly from one based on raw granulometric data, suggesting that closure problems do not severely influence regionalized classification of compositional data.
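
    A small numerical demonstration of the central point, that the covariance of unit-sum data is singular but a Moore-Penrose pseudo-inverse still yields a usable generalized Mahalanobis distance for discriminant functions, is given below. The gamma-distributed compositions are synthetic, not the Darss Sill granulometric data.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 200, 4                                   # samples, composition parts

# Compositional data: random positive parts normalized to sum to one (unit-sum constraint).
raw = rng.gamma(2.0, size=(n, k))
X = raw / raw.sum(axis=1, keepdims=True)

S = np.cov(X, rowvar=False)
print("rank of covariance:", np.linalg.matrix_rank(S))   # k - 1, i.e. singular

# The Moore-Penrose pseudo-inverse replaces the ordinary inverse in the discriminant function.
S_pinv = np.linalg.pinv(S)
mu = X.mean(axis=0)
x0 = X[0]
d2 = (x0 - mu) @ S_pinv @ (x0 - mu)             # Mahalanobis-type distance despite singular S
print("generalized Mahalanobis distance:", d2)
```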

  20. A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.

    PubMed

    Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P

    2010-11-01

    The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, the forces of 6 scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was considered around the humeral head, which was assumed spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrates that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
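
    A bare-bones version of the two-step idea, a minimum-norm pseudo-inverse solution followed by a null-space correction toward physiological force bounds, is sketched below. The moment-arm matrix, torque, and force limits are invented numbers, and the single least-squares null-space projection is a simplification of the published optimization.

```python
import numpy as np

rng = np.random.default_rng(8)
n_mus = 6
R = rng.uniform(-0.03, 0.03, size=(3, n_mus))   # moment-arm matrix (3 joint torques, 6 muscles)
tau = np.array([2.0, -1.0, 0.5])                 # required joint torque
f_max = 500.0 * np.ones(n_mus)                   # physiological force limits

# Step 1: pseudo-inverse solution (minimum-norm muscle forces reproducing tau).
R_pinv = np.linalg.pinv(R)
f = R_pinv @ tau

# Step 2: null-space correction toward the feasible range [0, f_max]
# (a single least-squares projection; the published model uses a fuller optimization).
N = np.eye(n_mus) - R_pinv @ R                   # null-space projector (R @ N = 0)
target = np.clip(f, 0.0, f_max)
z, *_ = np.linalg.lstsq(N, target - f, rcond=None)
f = f + N @ z                                    # torque unchanged, forces moved toward limits

print("torque residual:", np.linalg.norm(R @ f - tau))
print("forces:", np.round(f, 1))
```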

  1. Maximally Informative Statistics for Localization and Mapping

    NASA Technical Reports Server (NTRS)

    Deans, Matthew C.

    2001-01-01

    This paper presents an algorithm for localization and mapping for a mobile robot using monocular vision and odometry as its means of sensing. The approach uses the Variable State Dimension filtering (VSDF) framework to combine aspects of Extended Kalman filtering and nonlinear batch optimization. This paper describes two primary improvements to the VSDF. The first is to use an interpolation scheme based on Gaussian quadrature to linearize measurements rather than relying on analytic Jacobians. The second is to replace the inverse covariance matrix in the VSDF with its Cholesky factor to improve the computational complexity. Results of applying the filter to the problem of localization and mapping with omnidirectional vision are presented.

  2. Active control using control allocation for UAVs with seamless morphing wing

    NASA Astrophysics Data System (ADS)

    Wang, Zheng-jie; Sun, Yin-di; Yang, Da-qing; Guo, Shi-jun

    2012-04-01

    In this paper, a small seamless morphing wing aircraft with an MTOW of 51 kg is investigated. The leading edge (LE) and trailing edge (TE) control surfaces are positioned along the wing span. Based on the study of the aeroelastic wing characteristics, the controller should be designed as a function of the flight speed. Compared with a wing with a rigid hinged aileron, the morphing wing produces the rolling moment by deflecting the flexible TE and LE surfaces. An iterative method combining pseudo-inverse allocation and quadratic programming allocation within the actuator constraints is investigated to solve the nonlinear control allocation caused by the aerodynamics of the effectors. The simulation results show that the control method based on control allocation can achieve the control objective.
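
    One common way to combine a pseudo-inverse allocation with actuator constraints is to clip saturated effectors and redistribute the remaining demand through the pseudo-inverse of the unsaturated columns. The sketch below implements that heuristic; the effectiveness matrix, limits, and the redistribution loop are illustrative assumptions and not the quadratic-programming scheme of the paper.

```python
import numpy as np

def redistributed_pseudo_inverse(B, v_cmd, u_min, u_max, max_passes=5):
    """Allocate effector deflections u so that B @ u ~ v_cmd, starting from the
    pseudo-inverse solution and redistributing demand from saturated effectors."""
    n = B.shape[1]
    u = np.zeros(n)
    free = np.ones(n, dtype=bool)
    for _ in range(max_passes):
        resid = v_cmd - B @ u
        if not free.any() or np.linalg.norm(resid) < 1e-12:
            break
        du = np.zeros(n)
        du[free] = np.linalg.pinv(B[:, free]) @ resid   # allocate to unsaturated effectors only
        u_new = np.clip(u + du, u_min, u_max)
        saturated = free & ((u_new <= u_min) | (u_new >= u_max))
        u = u_new
        free = free & ~saturated
    return u

B = np.array([[0.8, 0.6, -0.6, -0.8],       # rolling-moment effectiveness of 4 TE surfaces (toy)
              [0.2, 0.3, 0.3, 0.2]])        # pitching-moment effectiveness (toy)
v_cmd = np.array([1.2, 0.1])
u = redistributed_pseudo_inverse(B, v_cmd, u_min=-0.5 * np.ones(4), u_max=0.5 * np.ones(4))
print(u, B @ u)
```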

  3. Active control using control allocation for UAVs with seamless morphing wing

    NASA Astrophysics Data System (ADS)

    Wang, Zheng-jie; Sun, Yin-di; Yang, Da-qing; Guo, Shi-jun

    2011-11-01

    In this paper, a small seamless morphing wing aircraft with an MTOW of 51 kg is investigated. The leading edge (LE) and trailing edge (TE) control surfaces are positioned along the wing span. Based on the study of the aeroelastic wing characteristics, the controller should be designed as a function of the flight speed. Compared with a wing with a rigid hinged aileron, the morphing wing produces the rolling moment by deflecting the flexible TE and LE surfaces. An iterative method combining pseudo-inverse allocation and quadratic programming allocation within the actuator constraints is investigated to solve the nonlinear control allocation caused by the aerodynamics of the effectors. The simulation results show that the control method based on control allocation can achieve the control objective.

  4. Network design for quantifying urban CO2 emissions: assessing trade-offs between precision and network density

    NASA Astrophysics Data System (ADS)

    Turner, Alexander J.; Shusterman, Alexis A.; McDonald, Brian C.; Teige, Virginia; Harley, Robert A.; Cohen, Ronald C.

    2016-11-01

    The majority of anthropogenic CO2 emissions are attributable to urban areas. While the emissions from urban electricity generation often occur in locations remote from consumption, many of the other emissions occur within the city limits. Evaluating the effectiveness of strategies for controlling these emissions depends on our ability to observe urban CO2 emissions and attribute them to specific activities. Cost-effective strategies for doing so have yet to be described. Here we characterize the ability of a prototype measurement network, modeled after the Berkeley Atmospheric CO2 Observation Network (BEACO2N) in California's Bay Area, in combination with an inverse model based on the coupled Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) to improve our understanding of urban emissions. The pseudo-measurement network includes 34 sites at roughly 2 km spacing covering an area of roughly 400 km2. The model uses an hourly 1 × 1 km2 emission inventory and 1 × 1 km2 meteorological calculations. We perform an ensemble of Bayesian atmospheric inversions to sample the combined effects of uncertainties of the pseudo-measurements and the model. We vary the estimates of the combined uncertainty of the pseudo-observations and model over a range of 20 to 0.005 ppm and vary the number of sites from 1 to 34. We use these inversions to develop statistical models that estimate the efficacy of the combined model-observing system in reducing uncertainty in CO2 emissions. We examine uncertainty in estimated CO2 fluxes on the urban scale, as well as for sources embedded within the city such as a line source (e.g., a highway) or a point source (e.g., emissions from the stacks of small industrial facilities). Using our inversion framework, we find that a dense network with moderate precision is the preferred setup for estimating area, line, and point sources from a combined uncertainty and cost perspective. The dense network considered here (modeled after the BEACO2N network with an assumed mismatch error of 1 ppm at an hourly temporal resolution) could estimate weekly CO2 emissions from an urban region with less than 5 % error, given our characterization of the combined observation and model uncertainty.

  5. Network design for quantifying urban CO 2 emissions: assessing trade-offs between precision and network density

    DOE PAGES

    Turner, Alexander J.; Shusterman, Alexis A.; McDonald, Brian C.; ...

    2016-11-01

    The majority of anthropogenic CO2 emissions are attributable to urban areas. While the emissions from urban electricity generation often occur in locations remote from consumption, many of the other emissions occur within the city limits. Evaluating the effectiveness of strategies for controlling these emissions depends on our ability to observe urban CO2 emissions and attribute them to specific activities. Cost-effective strategies for doing so have yet to be described. Here we characterize the ability of a prototype measurement network, modeled after the Berkeley Atmospheric CO2 Observation Network (BEACO2N) in California's Bay Area, in combination with an inverse model based on the coupled Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) to improve our understanding of urban emissions. The pseudo-measurement network includes 34 sites at roughly 2 km spacing covering an area of roughly 400 km2. The model uses an hourly 1 × 1 km2 emission inventory and 1 × 1 km2 meteorological calculations. We perform an ensemble of Bayesian atmospheric inversions to sample the combined effects of uncertainties of the pseudo-measurements and the model. We vary the estimates of the combined uncertainty of the pseudo-observations and model over a range of 20 to 0.005 ppm and vary the number of sites from 1 to 34. We use these inversions to develop statistical models that estimate the efficacy of the combined model-observing system in reducing uncertainty in CO2 emissions. We examine uncertainty in estimated CO2 fluxes on the urban scale, as well as for sources embedded within the city such as a line source (e.g., a highway) or a point source (e.g., emissions from the stacks of small industrial facilities). Using our inversion framework, we find that a dense network with moderate precision is the preferred setup for estimating area, line, and point sources from a combined uncertainty and cost perspective. The dense network considered here (modeled after the BEACO2N network with an assumed mismatch error of 1 ppm at an hourly temporal resolution) could estimate weekly CO2 emissions from an urban region with less than 5 % error, given our characterization of the combined observation and model uncertainty.

  6. Network design for quantifying urban CO 2 emissions: assessing trade-offs between precision and network density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Alexander J.; Shusterman, Alexis A.; McDonald, Brian C.

    The majority of anthropogenic CO2 emissions are attributable to urban areas. While the emissions from urban electricity generation often occur in locations remote from consumption, many of the other emissions occur within the city limits. Evaluating the effectiveness of strategies for controlling these emissions depends on our ability to observe urban CO2 emissions and attribute them to specific activities. Cost-effective strategies for doing so have yet to be described. Here we characterize the ability of a prototype measurement network, modeled after the Berkeley Atmospheric CO2 Observation Network (BEACO2N) in California's Bay Area, in combination with an inverse model based on the coupled Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) to improve our understanding of urban emissions. The pseudo-measurement network includes 34 sites at roughly 2 km spacing covering an area of roughly 400 km2. The model uses an hourly 1 × 1 km2 emission inventory and 1 × 1 km2 meteorological calculations. We perform an ensemble of Bayesian atmospheric inversions to sample the combined effects of uncertainties of the pseudo-measurements and the model. We vary the estimates of the combined uncertainty of the pseudo-observations and model over a range of 20 to 0.005 ppm and vary the number of sites from 1 to 34. We use these inversions to develop statistical models that estimate the efficacy of the combined model-observing system in reducing uncertainty in CO2 emissions. We examine uncertainty in estimated CO2 fluxes on the urban scale, as well as for sources embedded within the city such as a line source (e.g., a highway) or a point source (e.g., emissions from the stacks of small industrial facilities). Using our inversion framework, we find that a dense network with moderate precision is the preferred setup for estimating area, line, and point sources from a combined uncertainty and cost perspective. The dense network considered here (modeled after the BEACO2N network with an assumed mismatch error of 1 ppm at an hourly temporal resolution) could estimate weekly CO2 emissions from an urban region with less than 5 % error, given our characterization of the combined observation and model uncertainty.

  7. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo-inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased roughly 5.5 to seven times faster than did the minimum-norm solution (the pseudo-inverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.

  8. Pseudo 2D elastic waveform inversion for attenuation in the near surface

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Zhang, Jie

    2017-08-01

    Seismic waveform propagation could be significantly affected by heterogeneities in the near surface zone (0 m-500 m depth). As a result, it is important to obtain as much near surface information as possible. Seismic attenuation, characterized by QP and QS factors, may affect seismic waveform in both phase and amplitude; however, it is rarely estimated and applied to the near surface zone for seismic data processing. Applying a 1D elastic full waveform modelling program, we demonstrate that such effects cannot be overlooked in the waveform computation if the value of the Q factor is lower than approximately 100. Further, we develop a pseudo 2D elastic waveform inversion method in the common midpoint (CMP) domain that jointly inverts early arrivals for QP and surface waves for QS. In this method, although the forward problem is in 1D, by applying 2D model regularization, we obtain 2D QP and QS models through simultaneous inversion. A cross-gradient constraint between the QP and Qs models is applied to ensure structural consistency of the 2D inversion results. We present synthetic examples and a real case study from an oil field in China.

  9. pyJac: Analytical Jacobian generator for chemical kinetics

    NASA Astrophysics Data System (ADS)

    Niemeyer, Kyle E.; Curtis, Nicholas J.; Sung, Chih-Jen

    2017-06-01

    Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, generally reactive-flow simulations rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, finite differences numerically approximate these, but for larger chemical kinetic models this poses significant computational demands since the number of chemical source term evaluations scales with the square of species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce pyJac, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, pyJac uses an optimized evaluation order to minimize computational and memory operations. As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (number of species ranging 13-360) by showing agreement within 0.001% of matrices obtained via automatic differentiation. We then demonstrate the performance achievable on CPUs and GPUs using pyJac via matrix evaluation timing comparisons; the routines produced by pyJac outperformed first-order finite differences by 3-7.5 times and the existing analytical Jacobian software TChem by 1.1-2.2 times on a single-threaded basis. It is noted that TChem is not thread-safe, while pyJac is easily parallelized, and hence can greatly outperform TChem on multicore CPUs. The Jacobian matrix generator we describe here will be useful for reducing the cost of integrating chemical source terms with implicit algorithms in particular and algorithms that require an accurate Jacobian matrix in general. Furthermore, the open-source release of the program and Python-based implementation will enable wide adoption.
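
    The verification step mentioned above, checking an analytical Jacobian against a numerical approximation, is easy to reproduce on a toy kinetic system. The sketch below compares the hand-derived Jacobian of a single reversible reaction 2A <-> B with a first-order finite-difference estimate; it is a generic illustration and does not use pyJac or a real reaction mechanism.

```python
import numpy as np

kf, kr = 2.0, 0.5                      # forward/reverse rate constants for 2A <-> B

def source(y):
    a, b = y
    r = kf * a * a - kr * b            # net reaction rate
    return np.array([-2.0 * r, r])     # d[A]/dt, d[B]/dt

def analytic_jacobian(y):
    a, _ = y
    return np.array([[-4.0 * kf * a, 2.0 * kr],
                     [ 2.0 * kf * a,     -kr]])

def fd_jacobian(y, eps=1e-6):
    f0 = source(y)
    J = np.empty((2, 2))
    for j in range(2):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (source(yp) - f0) / eps
    return J

y = np.array([0.7, 0.3])
print(np.max(np.abs(analytic_jacobian(y) - fd_jacobian(y))))   # small, set by truncation error
```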

  10. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such data sets requires the ability to efficiently solve and optimize the nonlinear, time-domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing approaches to Richards equation inversion explicitly calculate the sensitivity matrix using finite differences or automatic differentiation; however, for large-scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large-scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open-source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
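    The central idea of not storing the Jacobian can be illustrated with a generic matrix-free Jacobian-vector product; the directional finite difference and the stand-in residual below are placeholders, not the implicit sensitivity algorithm or the SimPEG implementation described above.

      # Matrix-free Jacobian-vector product: apply J(m) to a vector with a
      # directional finite difference instead of storing J. The residual below
      # is a stand-in nonlinear forward problem, not the Richards equation.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator

      def residual(m):
          return np.array([m[0]**2 + m[1], np.sin(m[1]) - m[0]])

      def jvp_operator(F, m, eps=1e-7):
          f0 = F(m)
          def matvec(v):
              return (F(m + eps * v) - f0) / eps
          return LinearOperator((f0.size, m.size), matvec=matvec)

      m0 = np.array([1.0, 2.0])
      J = jvp_operator(residual, m0)
      print(J.matvec(np.array([1.0, 0.0])))   # ~ first column of the Jacobian, [2, -1]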

  11. Robust adaptive kinematic control of redundant robots

    NASA Technical Reports Server (NTRS)

    Tarokh, M.; Zuck, D. D.

    1992-01-01

    The paper presents a general method for the resolution of redundancy that combines the Jacobian pseudoinverse and augmentation approaches. A direct adaptive control scheme is developed to generate joint angle trajectories for achieving desired end-effector motion as well as additional user-defined tasks. The scheme ensures arbitrarily small errors between the desired and the actual motion of the manipulator. Explicit bounds on the errors are established that are directly related to the mismatch between the actual and estimated pseudoinverse Jacobian matrices, the motion velocity, and the controller gain. It is shown that the scheme is tolerant of the mismatch and, consequently, only infrequent pseudoinverse computations are needed during a typical robot motion. As a result, the scheme is computationally fast and can be implemented for real-time control of redundant robots. A method is incorporated to cope with robot singularities, allowing the manipulator to get very close to or even pass through a singularity while maintaining good tracking performance and acceptable joint velocities. Computer simulations and experimental results are provided in support of the theoretical developments.
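    The core kinematic step referred to above can be sketched as follows: joint velocities for a redundant planar 3R arm are obtained from a desired end-effector velocity through a (damped) Jacobian pseudoinverse. The link lengths, damping value, and configuration are arbitrary illustrations, not the paper's manipulator or controller.

      # Damped pseudoinverse resolution of joint velocities for a planar 3R arm;
      # the damping keeps joint rates bounded near singularities.
      import numpy as np

      def jacobian_3r(q, l=(1.0, 1.0, 1.0)):
          a = np.cumsum(q)                        # cumulative joint angles
          s, c = np.sin(a), np.cos(a)
          return np.array([
              [-l[0]*s[0] - l[1]*s[1] - l[2]*s[2], -l[1]*s[1] - l[2]*s[2], -l[2]*s[2]],
              [ l[0]*c[0] + l[1]*c[1] + l[2]*c[2],  l[1]*c[1] + l[2]*c[2],  l[2]*c[2]],
          ])

      def damped_pinv(J, lam=0.05):
          # J^T (J J^T + lam^2 I)^-1, which tends to the pseudoinverse as lam -> 0
          return J.T @ np.linalg.inv(J @ J.T + lam**2 * np.eye(J.shape[0]))

      q = np.array([0.3, -0.5, 0.8])
      xdot = np.array([0.1, 0.0])                 # desired end-effector velocity
      qdot = damped_pinv(jacobian_3r(q)) @ xdot   # resolved joint velocities
      print(qdot)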

  12. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.

    PubMed

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-15

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with larger time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
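    For readers unfamiliar with the solver family, the short example below applies SciPy's matrix-free newton_krylov to a toy 1D nonlinear residual; it illustrates a generic Newton-Krylov iteration, not the authors' analytical-Jacobian preconditioned flow solver.

      # Generic Newton-Krylov solve of a toy nonlinear system (diffusion plus a
      # cubic reaction term with Dirichlet ends); SciPy forms Jacobian-vector
      # products internally, so no Jacobian matrix is ever assembled.
      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          r = np.empty_like(u)
          r[0], r[-1] = u[0], u[-1]                                   # boundary rows
          r[1:-1] = u[2:] - 2*u[1:-1] + u[:-2] - 0.1*u[1:-1]**3 + 0.01
          return r

      u = newton_krylov(residual, np.zeros(50), method='lgmres', f_tol=1e-10)
      print("max |residual|:", np.abs(residual(u)).max())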

  13. A Newton–Krylov method with an approximate analytical Jacobian for implicit solution of Navier–Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    PubMed Central

    Asgharzadeh, Hafez; Borazjani, Iman

    2016-01-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with larger time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80–90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future. PMID:28042172

  14. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with larger time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.

  15. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE PAGES

    Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...

    2015-06-12

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
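    The modified Gauss-Newton-Levenberg-Marquardt step mentioned above has the generic form sketched below; the exponential test problem is a placeholder, not the transition probability model or the study's code.

      # One Levenberg-Marquardt update: solve (J^T J + lam*diag(J^T J)) dp = -J^T r,
      # shown on a toy exponential least-squares fit.
      import numpy as np

      def lm_step(J, r, lam=1e-3):
          JtJ = J.T @ J
          return np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), -J.T @ r)

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 30)
      y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.standard_normal(x.size)

      p = np.array([1.0, -1.0])                       # initial guess for (a, b) in a*exp(b*x)
      for _ in range(20):
          r = p[0] * np.exp(p[1] * x) - y             # residuals
          J = np.column_stack([np.exp(p[1] * x),      # dr/da
                               p[0] * x * np.exp(p[1] * x)])  # dr/db
          p = p + lm_step(J, r)
      print(p)                                        # approaches (2.0, -1.5)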

  16. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.

    1999-01-01

    The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in the high-frequency range (>5 Hz), followed by layer thickness. An iterative solution technique applied to the weighted equation proved very effective in the high-frequency range when using the Levenberg-Marquardt and singular-value decomposition techniques. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Iterative solutions to the weighted equation by the Levenberg-Marquardt and singular-value decomposition techniques are derived to estimate near-surface shear-wave velocity. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure. The inverse results of the real example are verified by borehole S-wave velocity measurements.
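    The damped, weighted solution step described above can be illustrated with a generic SVD-based solver in which a Marquardt damping factor filters small singular values; the matrix here is an arbitrary near-rank-deficient example, not a dispersion-curve Jacobian.

      # Damped least-squares solve via SVD: filter factors s/(s^2 + damping^2)
      # stabilize the contribution of small singular values.
      import numpy as np

      def damped_svd_solve(A, b, damping=0.1):
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          return Vt.T @ ((s / (s**2 + damping**2)) * (U.T @ b))

      A = np.array([[1.0, 1.0], [1.0, 1.0001], [2.0, 2.0]])   # nearly rank-deficient
      b = np.array([2.0, 2.0001, 4.0])
      print(damped_svd_solve(A, b, damping=0.01))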

  17. Feedforward-Feedback Hybrid Control for Magnetic Shape Memory Alloy Actuators Based on the Krasnosel'skii-Pokrovskii Model

    PubMed Central

    Zhou, Miaolei; Zhang, Qi; Wang, Jingyuan

    2014-01-01

    As a new type of smart material, magnetic shape memory alloy has the advantages of a fast response frequency and outstanding strain capability in the field of microdrive and microposition actuators. The hysteresis nonlinearity in magnetic shape memory alloy actuators, however, limits system performance and further application. Here we propose a feedforward-feedback hybrid control method to improve control precision and mitigate the effects of the hysteresis nonlinearity of magnetic shape memory alloy actuators. First, hysteresis nonlinearity compensation for the magnetic shape memory alloy actuator is implemented by establishing a feedforward controller, which is an inverse hysteresis model based on the Krasnosel'skii-Pokrovskii operator. Second, the classical Proportion Integration Differentiation feedback control is combined with the feedforward control to form the hybrid control system; to further enhance the adaptive performance of the system and improve the control accuracy, a Radial Basis Function neural network self-tuning Proportion Integration Differentiation feedback control replaces the classical Proportion Integration Differentiation feedback control. The self-learning ability of the Radial Basis Function neural network is utilized to obtain Jacobian information of the magnetic shape memory alloy actuator for on-line adjustment of the parameters of the Proportion Integration Differentiation controller. Finally, simulation results show that the hybrid control method proposed in this paper can greatly improve the control precision of the magnetic shape memory alloy actuator: the maximum tracking error is reduced from 1.1% in the open-loop system to 0.43% in the hybrid control system. PMID:24828010

  18. Feedforward-feedback hybrid control for magnetic shape memory alloy actuators based on the Krasnosel'skii-Pokrovskii model.

    PubMed

    Zhou, Miaolei; Zhang, Qi; Wang, Jingyuan

    2014-01-01

    As a new type of smart material, magnetic shape memory alloy has the advantages of a fast response frequency and outstanding strain capability in the field of microdrive and microposition actuators. The hysteresis nonlinearity in magnetic shape memory alloy actuators, however, limits system performance and further application. Here we propose a feedforward-feedback hybrid control method to improve control precision and mitigate the effects of the hysteresis nonlinearity of magnetic shape memory alloy actuators. First, hysteresis nonlinearity compensation for the magnetic shape memory alloy actuator is implemented by establishing a feedforward controller, which is an inverse hysteresis model based on the Krasnosel'skii-Pokrovskii operator. Second, the classical Proportion Integration Differentiation feedback control is combined with the feedforward control to form the hybrid control system; to further enhance the adaptive performance of the system and improve the control accuracy, a Radial Basis Function neural network self-tuning Proportion Integration Differentiation feedback control replaces the classical Proportion Integration Differentiation feedback control. The self-learning ability of the Radial Basis Function neural network is utilized to obtain Jacobian information of the magnetic shape memory alloy actuator for on-line adjustment of the parameters of the Proportion Integration Differentiation controller. Finally, simulation results show that the hybrid control method proposed in this paper can greatly improve the control precision of the magnetic shape memory alloy actuator: the maximum tracking error is reduced from 1.1% in the open-loop system to 0.43% in the hybrid control system.

  19. Neural network based adaptive control for nonlinear dynamic regimes

    NASA Astrophysics Data System (ADS)

    Shin, Yoonghyun

    Adaptive control designs using neural networks (NNs) based on dynamic inversion are investigated for aerospace vehicles which are operated at highly nonlinear dynamic regimes. NNs play a key role as the principal element of adaptation to approximately cancel the effect of inversion error, which subsequently improves robustness to parametric uncertainty and unmodeled dynamics in nonlinear regimes. An adaptive control scheme previously named 'composite model reference adaptive control' is further developed so that it can be applied to multi-input multi-output output feedback dynamic inversion. It can have adaptive elements in both the dynamic compensator (linear controller) part and/or in the conventional adaptive controller part, also utilizing state estimation information for NN adaptation. This methodology has more flexibility and thus hopefully greater potential than conventional adaptive designs for adaptive flight control in highly nonlinear flight regimes. The stability of the control system is proved through Lyapunov theorems, and validated with simulations. The control designs in this thesis also include the use of 'pseudo-control hedging' techniques which are introduced to prevent the NNs from attempting to adapt to various actuation nonlinearities such as actuator position and rate saturations. Control allocation is introduced for the case of redundant control effectors including thrust vectoring nozzles. A thorough comparison study of conventional and NN-based adaptive designs for a system under a limit cycle, wing-rock, is included in this research, and the NN-based adaptive control designs demonstrate their performances for two highly maneuverable aerial vehicles, NASA F-15 ACTIVE and FQM-117B unmanned aerial vehicle (UAV), operated under various nonlinearities and uncertainties.

  20. CT-derived Biomechanical Metrics Improve Agreement Between Spirometry and Emphysema

    PubMed Central

    Bhatt, Surya P.; Bodduluri, Sandeep; Newell, John D.; Hoffman, Eric A.; Sieren, Jessica C.; Han, Meilan K.; Dransfield, Mark T.; Reinhardt, Joseph M.

    2016-01-01

    Rationale and Objectives Many COPD patients have marked discordance between FEV1 and degree of emphysema on CT. Biomechanical differences between these patients have not been studied. We aimed to identify reasons for the discordance between CT and spirometry in some patients with COPD. Materials and Methods Subjects with GOLD stage I–IV from a large multicenter study (COPDGene) were arranged by percentiles of %predicted FEV1 and emphysema on CT. Three categories were created using differences in percentiles: Cat_spir with predominant airflow obstruction/minimal emphysema, Cat_CT with predominant emphysema/minimal airflow obstruction, and Cat_matched with matched FEV1 and emphysema. Image registration was used to derive Jacobian determinants, a measure of lung elasticity, anisotropy, and strain tensors, to assess biomechanical differences between groups. Regression models were created with the above categories as outcome variable, adjusting for demographics, scanner type, quantitative CT-derived emphysema, gas trapping, and airway thickness (Model 1), and after adding biomechanical CT metrics (Model 2). Results Jacobian determinants, anisotropy, and strain tensors were strongly associated with FEV1. With Cat_matched as control, Model 2 predicted Cat_spir and Cat_CT better than Model 1 (Akaike Information Criterion, AIC 255.8 vs. 320.8). In addition to demographics, the strongest independent predictors of FEV1 were Jacobian mean (β = 1.60, 95% CI = 1.16 to 1.98; p < 0.001), coefficient of variation (CV) of Jacobian (β = 1.45, 95% CI = 0.86 to 2.03; p < 0.001), and CV strain (β = 1.82, 95% CI = 0.68 to 2.95; p = 0.001). CVs of Jacobian and strain are both potential markers of biomechanical lung heterogeneity. Conclusions CT-derived measures of lung mechanics improve the link between quantitative CT and spirometry, offering the potential for new insights into the linkage between regional parenchymal destruction and global decrement in lung function in COPD patients. PMID:27055745

  1. Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy

    PubMed Central

    Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro

    2016-01-01

    The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization in a shorter computational time than prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication, was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates, was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799
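    A stripped-down version of the Jacobian-based iterative update is sketched below: a pose estimate is refined so that a field model matches the sensor readings. The six-output field model and the finite-difference Jacobian here are placeholders for the paper's closed-form magnetic field and Jacobian expressions.

      # Iterative pose refinement by Gauss-Newton / pseudoinverse steps against a
      # toy field model B(p); in the paper the field and its Jacobian come from
      # precomputed finite-element fits rather than this stand-in function.
      import numpy as np

      def field_model(p):
          x, y, z = p
          return np.array([x + 0.1*y*z, y + 0.1*x*z, z + 0.1*x*y,
                           x - y, y - z, x + y + z])

      def numeric_jacobian(F, p, eps=1e-6):
          f0 = F(p)
          J = np.empty((f0.size, p.size))
          for j in range(p.size):
              dp = np.zeros_like(p)
              dp[j] = eps
              J[:, j] = (F(p + dp) - f0) / eps
          return J

      p_true = np.array([0.10, -0.20, 0.30])
      meas = field_model(p_true)                      # simulated sensor readings
      p = np.array([0.0, 0.0, 0.1])                   # coarse initialization
      for _ in range(10):
          r = field_model(p) - meas
          p = p - np.linalg.pinv(numeric_jacobian(field_model, p)) @ r
      print(p)                                        # converges toward p_true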

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wannamaker, Philip E.

    We have developed an algorithm for the inversion of magnetotelluric (MT) data to a 3D earth resistivity model based upon the finite element method. Hexahedral edge finite elements are implemented to accommodate discontinuities in the electric field across resistivity boundaries, and to accurately simulate topographic variations. All matrices are reduced and solved using direct solution modules, which avoids the ill-conditioning endemic to iterative solvers such as conjugate gradients; PARDISO is used for the finite element system and PLASMA for the parameter step estimate. Large model parameterizations can be handled by transforming the Gauss-Newton estimator to data-space form. Accuracy of the forward problem and Jacobians has been checked by comparison to integral equation results and by limiting asymptotes. Inverse accuracy and performance have been verified against the public Dublin Secret Test Model 2 and the well-known Mount St Helens 3D MT data set. We believe this algorithm is the most capable yet for forming 3D images of earth resistivity structure and for assessing their implications for geothermal fluids and pathways.

  3. Use of symbolic computation in robotics education

    NASA Technical Reports Server (NTRS)

    Vira, Naren; Tunstel, Edward

    1992-01-01

    An application of symbolic computation in robotics education is described. A software package is presented which combines generality, user interaction, and user-friendliness with the systematic usage of symbolic computation and artificial intelligence techniques. The software utilizes MACSYMA, a LISP-based symbolic algebra language, to automatically generate closed-form expressions representing forward and inverse kinematics solutions, the Jacobian transformation matrices, robot pose error-compensation model equations, and the Lagrange dynamics formulation for N-degree-of-freedom, open-chain robotic manipulators. The goal of such a package is to aid faculty and students in the robotics course by removing the burdensome tasks of mathematical manipulation. The software package has been successfully tested for its accuracy using commercially available robots.
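    The same kind of closed-form generation can be reproduced today with an open-source computer algebra system; the SymPy snippet below derives the Jacobian of a planar 2R arm symbolically and is only an illustration of the idea, not the MACSYMA package described above.

      # Symbolic forward kinematics and Jacobian of a planar 2R arm with SymPy.
      import sympy as sp

      q1, q2, l1, l2 = sp.symbols('q1 q2 l1 l2')
      x = l1*sp.cos(q1) + l2*sp.cos(q1 + q2)          # end-effector position
      y = l1*sp.sin(q1) + l2*sp.sin(q1 + q2)
      J = sp.Matrix([x, y]).jacobian(sp.Matrix([q1, q2]))
      sp.pprint(sp.simplify(J))                       # closed-form 2x2 Jacobian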

  4. Hydrologic Process-oriented Optimization of Electrical Resistivity Tomography

    NASA Astrophysics Data System (ADS)

    Hinnell, A.; Bechtold, M.; Ferre, T. A.; van der Kruk, J.

    2010-12-01

    Electrical resistivity tomography (ERT) is commonly used in hydrologic investigations. Advances in joint and coupled hydrogeophysical inversion have enhanced the quantitative use of ERT to construct and condition hydrologic models (i.e., identify hydrologic structure and estimate hydrologic parameters). However, the selection of which electrical resistivity data to collect and use is often determined by a combination of data requirements for geophysical analysis, intuition on the part of the hydrogeophysicist, and logistical constraints of the laboratory or field site. One of the advantages of coupled hydrogeophysical inversion is the direct link between the hydrologic model and the individual geophysical data used to condition the model. That is, there is no requirement to collect geophysical data suitable for independent geophysical inversion. The geophysical measurements collected can be optimized for estimation of hydrologic model parameters rather than for developing a geophysical model. Using a synthetic model of drip irrigation, we evaluate the value of individual resistivity measurements for describing the soil hydraulic properties and then use this information to build a data set optimized for characterizing hydrologic processes. We then compare the information content in the optimized data set with the information content in a data set optimized using a Jacobian sensitivity analysis.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lin; Dai, Zhenxue; Gong, Huili

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.

  6. TU-G-BRA-03: Predicting Radiation Therapy Induced Ventilation Changes Using 4DCT Jacobian Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, T; Du, K; Bayouth, J

    2015-06-15

    Purpose: Longitudinal changes in lung ventilation following radiation therapy can be mapped using four-dimensional computed tomography (4DCT) and image registration. This study aimed to predict ventilation changes caused by radiation therapy (RT) as a function of pre-RT ventilation and delivered dose. Methods: 4DCT images were acquired before and 3 months after radiation therapy for 13 subjects. Jacobian ventilation maps were calculated from the 4DCT images, warped to a common coordinate system, and a Jacobian ratio map was computed voxel-by-voxel as the ratio of post-RT to pre-RT Jacobian calculations. A leave-one-out method was used to build a response model for each subject: post-RT to pre-RT Jacobian ratio data and dose distributions of 12 subjects were applied to the subject’s pre-RT Jacobian map to predict the post-RT Jacobian. The predicted Jacobian map was compared to the actual post-RT Jacobian map to evaluate efficacy. Within this cohort, 8 subjects had repeat pre-RT scans that were compared as a reference for no ventilation change. Maps were compared using gamma pass rate criteria of 2 mm distance-to-agreement and 6% ventilation difference. Gamma pass rates were compared using paired t-tests to determine significant differences. Further analysis masked non-radiation-induced changes by excluding voxels below specified dose thresholds. Results: Visual inspection demonstrates the predicted post-RT ventilation map is similar to the actual map in magnitude and distribution. Quantitatively, the percentages of voxels in agreement when excluding voxels receiving below specified doses are 74%/20 Gy, 73%/10 Gy, 73%/5 Gy, and 71%/0 Gy. By comparison, repeat scans produced 73% of voxels within the 6%/2 mm criteria. The agreement of the actual post-RT maps with the predicted maps was significantly better than agreement with pre-RT maps (p<0.02). Conclusion: This work validates that significant changes to ventilation post-RT can be predicted. The differences between the predicted and actual outcome are similar to differences between repeat scans with equivalent ventilation. This work was supported by NIH grant CA166703 and a Pilot Grant from University of Iowa Carver College of Medicine.

  7. Implicit solution of Navier-Stokes equations on staggered curvilinear grids using a Newton-Krylov method with a novel analytical Jacobian.

    NASA Astrophysics Data System (ADS)

    Borazjani, Iman; Asgharzadeh, Hafez

    2015-11-01

    Flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions. However, implementing an implicit solver for nonlinear equations, including the Navier-Stokes equations, is not straightforward. Newton-Krylov subspace methods (NKMs) are one of the most advanced iterative methods for solving non-linear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs massively depends on the Jacobian formation method, e.g., automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is an inexpensive alternative, but deriving the analytical Jacobian for the Navier-Stokes equations on a staggered grid is challenging. The NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90 degree bend. The developed method successfully handled complex geometries, such as an intracranial aneurysm, with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation depending on the grid (size) and the flow problem. The developed methods are fully parallelized with parallel efficiency of 80-90% on the problems tested.

  8. A comparison between IMSC, PI and MIMSC methods in controlling the vibration of flexible systems

    NASA Technical Reports Server (NTRS)

    Baz, A.; Poh, S.

    1987-01-01

    A comparative study is presented of three active control algorithms that have proven successful in controlling the vibrations of large flexible systems: the Independent Modal Space Control (IMSC), the Pseudo-inverse (PI), and the Modified Independent Modal Space Control (MIMSC) methods. Emphasis is placed on demonstrating the effectiveness of the MIMSC method in controlling the vibration of large systems with a small number of actuators by using an efficient time-sharing strategy. Such a strategy favors the MIMSC over the IMSC method, which requires a number of actuators equal to the number of controlled modes, and also over the PI method, which attempts to control a large number of modes with a smaller number of actuators through the use of an inexact statistical realization of a modal controller. Numerical examples are presented to illustrate the main features of the three algorithms and the merits of the MIMSC method.

  9. A robust pseudo-inverse spectral filter applied to the Earth Radiation Budget Experiment (ERBE) scanning channels

    NASA Technical Reports Server (NTRS)

    Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.

    1984-01-01

    Computer simulations of a least-squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudo-inverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field-of-view (IFOV) scene-type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for a given Sun-scene-spacecraft geometry. It is found that the pseudo-inverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum-variance and essentially unbiased radiance estimates.
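    Schematically, the filter solves a small overdetermined linear system: three filtered channel measurements are mapped back to two unfiltered radiance components by a pseudo-inverse. The response matrix and radiance values below are invented for illustration; in ERBE they come from the a priori channel-response models mentioned above.

      # Pseudo-inverse (least-squares) recovery of shortwave and longwave radiance
      # components from three spectrally imperfect channels.
      import numpy as np

      A = np.array([[0.95, 0.02],      # shortwave channel response to (SW, LW)
                    [0.03, 0.90],      # longwave channel response
                    [0.98, 0.96]])     # "total" channel response
      r_true = np.array([120.0, 250.0])                  # illustrative radiances
      m = A @ r_true + np.random.normal(0.0, 0.5, 3)     # simulated filtered measurements
      r_hat = np.linalg.pinv(A) @ m                      # unfiltered radiance estimates
      print(r_hat)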

  10. A coupled thermo-mechanical pseudo inverse approach for preform design in forging

    NASA Astrophysics Data System (ADS)

    Thomas, Anoop Ebey; Abbes, Boussad; Li, Yu Ming; Abbes, Fazilay; Guo, Ying-Qiao; Duval, Jean-Louis

    2017-10-01

    Hot forging is a process used to form difficult-to-form materials as well as to achieve complex geometries. This is possible due to the reduction of the yield stress at high temperatures and the subsequent increase in formability. Numerical methods have been used to predict the material yield and the stress/strain states of the final product. The Pseudo Inverse Approach (PIA), developed in the context of cold forming, provides a quick estimate of the stress and strain fields in the final product for a given initial shape. In this paper, PIA is extended to include thermal effects on the forging process. A Johnson-Cook thermo-viscoplastic material law is considered, and a staggered scheme is employed for the coupling between the mechanical and thermal problems. The results are compared with available commercial codes to show the efficiency and the limitations of PIA.

  11. Viscous anisotropy of textured olivine aggregates: 2. Micromechanical model

    NASA Astrophysics Data System (ADS)

    Hansen, Lars N.; Conrad, Clinton P.; Boneh, Yuval; Skemer, Philip; Warren, Jessica M.; Kohlstedt, David L.

    2016-10-01

    The significant viscous anisotropy that results from crystallographic alignment (texture) of olivine grains in deformed upper mantle rocks strongly influences a large variety of geodynamic processes. Our ability to explore the effects of anisotropic viscosity in simulations of these processes requires a mechanical model that can predict the magnitude of anisotropy and its evolution. Unfortunately, existing models of olivine textural evolution and viscous anisotropy are calibrated for relatively small deformations and simple strain paths, making them less general than desired for many large-scale geodynamic scenarios. Here we develop a new set of micromechanical models to describe the mechanical behavior and textural evolution of olivine through a large range of strains and complex strain histories. For the mechanical behavior, we explore two extreme scenarios, one in which each grain experiences the same stress tensor (Sachs model) and one in which each grain undergoes a strain rate as close as possible to the macroscopic strain rate (pseudo-Taylor model). For the textural evolution, we develop a new model in which the director method is used to control the rate of grain rotation and the available slip systems in olivine are used to control the axis of rotation. Only recently has enough laboratory data on the deformation of olivine become available to calibrate these models. We use these new data to conduct inversions for the best parameters to characterize both the mechanical and textural evolution models. These inversions demonstrate that the calibrated pseudo-Taylor model best reproduces the mechanical observations. Additionally, the pseudo-Taylor textural evolution model can reasonably reproduce the observed texture strength, shape, and orientation after large and complex deformations. A quantitative comparison between our calibrated models and previously published models reveals that our new models excel in predicting the magnitude of viscous anisotropy and the details of the textural evolution. In addition, we demonstrate that the mechanical and textural evolution models can be coupled and used to reproduce mechanical evolution during large-strain torsion tests. This set of models therefore provides a new geodynamic tool for incorporating viscous anisotropy into large-scale numerical simulations.

  12. Motion control of musculoskeletal systems with redundancy.

    PubMed

    Park, Hyunjoo; Durand, Dominique M

    2008-12-01

    Motion control of musculoskeletal systems for functional electrical stimulation (FES) is a challenging problem due to the inherent complexity of the systems. These include being highly nonlinear, strongly coupled, time-varying, time-delayed, and redundant. The redundancy in particular makes it difficult to find an inverse model of the system for control purposes. We have developed a control system for multiple input multiple output (MIMO) redundant musculoskeletal systems with little prior information. The proposed method separates the steady-state properties from the dynamic properties. The dynamic control uses a steady-state inverse model and is implemented with both a PID controller for disturbance rejection and an artificial neural network (ANN) feedforward controller for fast trajectory tracking. A mechanism to control the sum of the muscle excitation levels is also included. To test the performance of the proposed control system, a two degree of freedom ankle-subtalar joint model with eight muscles was used. The simulation results show that separation of steady-state and dynamic control allow small output tracking errors for different reference trajectories such as pseudo-step, sinusoidal and filtered random signals. The proposed control method also demonstrated robustness against system parameter and controller parameter variations. A possible application of this control algorithm is FES control using multiple contact cuff electrodes where mathematical modeling is not feasible and the redundancy makes the control of dynamic movement difficult.

  13. Electromagnetic imaging with an arbitrarily oriented magnetic dipole

    NASA Astrophysics Data System (ADS)

    Guillemoteau, Julien; Sailhac, Pascal; Behaegel, Mickael

    2013-04-01

    We present the theoretical background for geophysical EM analysis with arbitrarily oriented magnetic dipoles. The first application of such a development is the ability to correct data that were not acquired in accordance with the assumptions of current interpretation methods. To illustrate this case, we study airborne TEM measurements over an inclined ground surface, a situation encountered when measurements are made in mountainous areas. We show in particular that transient central-loop helicopter-borne magnetic data should be corrected by a factor proportional to the angle of the slope under the system. In addition, we studied the sensitivity function of a grounded multi-angle frequency-domain system. Our development leads to a general Jacobian kernel that can be used for all induction numbers and all positions/orientations of both transmitter and receiver in the air layer. Indeed, if one could design a system controlling the angles of Tx and Rx, the present development would allow such a data set to be interpreted and would enhance the ground analysis, especially for constraining the 3D anisotropic inverse problem.

  14. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-04-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques, which exploit couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, the Jacobian matrix relating the power density change to the change in conductivity is employed to linearize the nonlinear problem. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
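    As a schematic of the final reconstruction step, the snippet below applies a linear back-projection to a linearized problem d = J dσ; the random sensitivity matrix and block-shaped anomaly stand in for the analytic UMEIT Jacobian and conductivity change derived in the paper.

      # Linear back-projection (LBP): project the data change back through the
      # transpose of the sensitivity matrix and normalize by its column sums.
      import numpy as np

      rng = np.random.default_rng(1)
      n_meas, n_pix = 40, 100
      J = rng.standard_normal((n_meas, n_pix))         # placeholder sensitivity matrix
      d_sigma = np.zeros(n_pix)
      d_sigma[45:55] = 1.0                             # true conductivity perturbation
      d_data = J @ d_sigma                             # linearized data change

      recon = (J.T @ d_data) / np.sum(np.abs(J), axis=0)
      print("mean inside anomaly:  ", recon[45:55].mean())
      print("mean outside anomaly: ", np.delete(recon, np.arange(45, 55)).mean())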

  15. On Gammelgaard's Formula for a Star Product with Separation of Variables

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2013-08-01

    We show that Gammelgaard's formula expressing a star product with separation of variables on a pseudo-Kähler manifold in terms of directed graphs without cycles is equivalent to an inversion formula for an operator on a formal Fock space. We prove this inversion formula directly and thus offer an alternative approach to Gammelgaard's formula which gives more insight into the question why the directed graphs in his formula have no cycles.

  16. Time-lapse three-dimensional inversion of complex conductivity data using an active time constrained (ATC) approach

    USGS Publications Warehouse

    Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.

    2011-01-01

    Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram. © 2011 The Authors, Geophysical Journal International © 2011 RAS.

  17. Coordinated path-following and direct yaw-moment control of autonomous electric vehicles with sideslip angle estimation

    NASA Astrophysics Data System (ADS)

    Guo, Jinghua; Luo, Yugong; Li, Keqiang; Dai, Yifan

    2018-05-01

    This paper presents a novel coordinated path-following system (PFS) and direct yaw-moment control (DYC) for autonomous electric vehicles via a hierarchical control technique. In the high-level control law design, a new fuzzy factor is introduced based on the magnitude of the longitudinal velocity of the vehicle, and a linear time-varying (LTV) model predictive controller (MPC) is proposed to acquire the wheel steering angle and the external yaw moment. Then, a pseudo-inverse (PI) low-level control allocation law is designed to realize tracking of the desired external yaw moment and management of the redundant tire actuators. Furthermore, the vehicle sideslip angle is estimated by data fusion of a low-cost GPS and INS, obtained by integrating modified INS signals with GPS signals as the initial value. Finally, the effectiveness of the proposed control system is validated by simulation and experimental tests.
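    The low-level allocation step can be sketched as a pseudo-inverse mapping from the desired yaw moment to individual wheel torques; the effectiveness matrix below uses invented track-width and wheel-radius numbers, not the vehicle parameters or allocation law of the paper.

      # Pseudo-inverse control allocation: distribute a desired external yaw
      # moment over four in-wheel motor torques with the minimum-norm solution.
      import numpy as np

      track, r_w = 1.6, 0.3                            # hypothetical track width, wheel radius [m]
      # yaw moment generated per unit drive torque at [FL, FR, RL, RR]
      B = np.array([[-track/(2*r_w), track/(2*r_w), -track/(2*r_w), track/(2*r_w)]])

      M_des = np.array([800.0])                        # desired external yaw moment [N m]
      u = np.linalg.pinv(B) @ M_des                    # wheel torques [N m]
      print(u, "achieved moment:", B @ u)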

  18. Estimation of regional lung expansion via 3D image registration

    NASA Astrophysics Data System (ADS)

    Pan, Yan; Kumar, Dinesh; Hoffman, Eric A.; Christensen, Gary E.; McLennan, Geoffrey; Song, Joo Hyun; Ross, Alan; Simon, Brett A.; Reinhardt, Joseph M.

    2005-04-01

    A method is described to estimate regional lung expansion and related biomechanical parameters using multiple CT images of the lungs, acquired at different inflation levels. In this study, the lungs of two sheep were imaged utilizing a multi-detector row CT at different lung inflations in the prone and supine positions. Using the lung surfaces and the airway branch points for guidance, a 3D inverse-consistent image registration procedure was used to match different lung volumes at each orientation. The registration was validated using a set of implanted metal markers. After registration, the Jacobian of the deformation field was computed to express regional expansion or contraction. The regional lung expansion at different pressures and different orientations is compared.
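    The expansion measure referred to above is the voxel-wise determinant of the deformation gradient; a minimal sketch on a synthetic, uniformly dilating displacement field is shown below (generic NumPy, not the registration pipeline used in the study).

      # Voxel-wise Jacobian determinant of a 3D displacement field u(x); for the
      # synthetic uniform 10% dilation below the determinant is ~1.1^3 everywhere.
      import numpy as np

      nz, ny, nx = 16, 16, 16
      z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing='ij')
      u = np.stack([0.1*z, 0.1*y, 0.1*x])              # displacement components (uz, uy, ux)

      def jacobian_determinant(u):
          F = np.empty(u.shape[1:] + (3, 3))
          for i in range(3):
              grads = np.gradient(u[i])                # d u_i / d x_j, unit voxel spacing
              for j in range(3):
                  F[..., i, j] = grads[j]
          F += np.eye(3)                               # deformation gradient F = I + du/dx
          return np.linalg.det(F)

      print(jacobian_determinant(u).mean())            # ~1.331 for 10% isotropic expansion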

  19. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form and then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way and has a significantly faster convergence rate.

  20. Inversion method applied to the rotation curves of galaxies

    NASA Astrophysics Data System (ADS)

    Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.

    2017-07-01

    We used simulated annealing, Monte Carlo, and genetic algorithm methods for matching numerical data of density and velocity profiles in some low surface brightness galaxies with theoretical models of Boehmer-Harko, Navarro-Frenk-White, and Pseudo Isothermal profiles for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science, including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, Genetic Algorithm, and Simulated Annealing) in order to determine the best fit of the observed data of the density and velocity profiles of a set of low surface brightness galaxies (De Block et al. 2001, ApJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko BH (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White NFW (Navarro et al. 2007, ApJ, 490, 493), and Pseudo Isothermal Profile PI (Robles & Matos 2012, MNRAS, 422, 282) models. The results obtained showed that the BH and PI profile dark matter galaxies fit very well for both the density and the velocity profiles; in contrast, the NFW model did not make good adjustments to the profiles in any analyzed galaxy.

  1. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex-valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
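    The kind of finite-difference validation described can be reproduced with the standard first-order sensitivity of a simple eigenvalue, dλ/dA_ij = w_i v_j / (wᵀ v), where v and w are the right and left eigenvectors; the snippet below is a generic check of that textbook formula, not the expressions derived in the thesis.

      # Compare the analytical eigenvalue sensitivity w_i v_j / (w^T v) with a
      # forward finite difference in one matrix entry.
      import numpy as np

      rng = np.random.default_rng(3)
      A = rng.standard_normal((4, 4))

      vals, V = np.linalg.eig(A)
      k = int(np.argmax(vals.real))                      # pick one (assumed simple) eigenvalue
      v = V[:, k]
      valsT, W = np.linalg.eig(A.T)
      w = W[:, int(np.argmin(np.abs(valsT - vals[k])))]  # matching left eigenvector

      analytic = np.outer(w, v) / (w @ v)                # d lambda / d A_ij for all i, j

      i, j, eps = 1, 2, 1e-6
      Ap = A.copy()
      Ap[i, j] += eps
      vals_p = np.linalg.eig(Ap)[0]
      fd = (vals_p[np.argmin(np.abs(vals_p - vals[k]))] - vals[k]) / eps
      print(analytic[i, j], fd)                          # agree to ~1e-6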

  2. Improved hydrogeophysical characterization and monitoring through parallel modeling and inversion of time-domain resistivity and induced-polarization data

    USGS Publications Warehouse

    Johnson, Timothy C.; Versteeg, Roelof J.; Ward, Andy; Day-Lewis, Frederick D.; Revil, André

    2010-01-01

    Electrical geophysical methods have found wide use in the growing discipline of hydrogeophysics for characterizing the electrical properties of the subsurface and for monitoring subsurface processes in terms of the spatiotemporal changes they induce in subsurface conductivity, chargeability, and source currents. Presently, multichannel and multielectrode data collection systems can collect large data sets in relatively short periods of time. Practitioners, however, often are unable to fully utilize these large data sets and the information they contain because of standard desktop-computer processing limitations. These limitations can be addressed by utilizing the storage and processing capabilities of parallel computing environments. We have developed a parallel distributed-memory forward and inverse modeling algorithm for analyzing resistivity and time-domain induced polarization (IP) data. The primary components of the parallel computations include distributed computation of the pole solutions in forward mode, distributed storage and computation of the Jacobian matrix in inverse mode, and parallel execution of the inverse equation solver. We have tested the corresponding parallel code in three efforts: (1) resistivity characterization of the Hanford 300 Area Integrated Field Research Challenge site in Hanford, Washington, U.S.A., (2) resistivity characterization of a volcanic island in the southern Tyrrhenian Sea in Italy, and (3) resistivity and IP monitoring of biostimulation at a Superfund site in Brandywine, Maryland, U.S.A. Inverse analysis of each of these data sets would be limited or impossible in a standard serial computing environment, which underscores the need for parallel high-performance computing to fully utilize the potential of electrical geophysical methods in hydrogeophysical applications.

  3. A Theory of Electrical Conductivity of Pseudo-Binary Equivalent Molten Salt

    NASA Astrophysics Data System (ADS)

    Matsunaga, Shigeki; Koishi, Takahiro; Tamaki, Shigeru

    2008-02-01

    Many years ago, Sundheim proposed the "universal golden rule" on experimental grounds: the ratio of the partial ionic conductivities in a molten binary salt is equal to the inverse mass ratio of the ions, σ+/σ- = m-/m+. In previous works, we proved this relation theoretically using the Langevin equation and by molecular dynamics (MD) simulations. In this study, the pseudo-binary molten salt NaCl-KCl system is investigated in the same theoretical framework, as a continuation of our serial work on molten salts. The MD results are also reported in connection with the theoretical analysis.

  4. Tensor-GMRES method for large sparse systems of nonlinear equations

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1994-01-01

    This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.

  5. Deterministic Reconfigurable Control Design for the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Wagner, Elaine A.; Burken, John J.; Hanson, Curtis E.; Wohletz, Jerry M.

    1998-01-01

    In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. Four reconfigurable control design methods were investigated for the X-33 vehicle: Redistributed Pseudo-Inverse, General Constrained Optimization, Automated Failure Dependent Gain Schedule, and an Off-line Nonlinear General Constrained Optimization. The Off-line Nonlinear General Constrained Optimization approach was chosen for implementation on the X-33. Two example failures are shown, a right outboard elevon jam at 25 deg. at a Mach 3 entry condition, and a left rudder jam at 30 degrees. Note however, that reconfigurable control laws have been designed for the entire flight envelope. Comparisons between responses with the nominal controller and reconfigurable controllers show the benefits of reconfiguration. Single jam aerosurface failures were considered, and failure detection and identification is considered accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.
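
    As a generic sketch of the redistributed pseudo-inverse idea (not the X-33 implementation), the code below allocates a commanded moment vector among several surfaces with a pseudo-inverse, clamps any surface that exceeds its hypothetical deflection limits, and redistributes the unmet demand among the remaining surfaces; the effectiveness matrix, limits, and demand are invented for illustration.

```python
import numpy as np

def redistributed_pseudo_inverse(B, v, u_min, u_max, max_iter=10):
    """Generic redistributed pseudo-inverse control allocation: solve B u = v
    with a pseudo-inverse, clamp saturated effectors at their limits, and
    redistribute the unmet demand among the remaining free effectors."""
    n = B.shape[1]
    u = np.zeros(n)
    free = np.ones(n, dtype=bool)
    for _ in range(max_iter):
        # Demand not yet produced by the saturated (fixed) effectors.
        residual = v - B[:, ~free] @ u[~free]
        u[free] = np.linalg.pinv(B[:, free]) @ residual
        saturated = free & ((u < u_min) | (u > u_max))
        if not saturated.any():
            break
        u = np.clip(u, u_min, u_max)
        free &= ~saturated
        if not free.any():
            break
    return u

# Hypothetical 3-axis moment demand and 5-surface effectiveness matrix.
B = np.array([[ 1.0, -1.0, 0.5, -0.5, 0.0],
              [ 0.8,  0.8, 0.2,  0.2, 0.0],
              [ 0.1, -0.1, 0.9, -0.9, 1.0]])
v = np.array([0.6, 1.2, -0.4])
u = redistributed_pseudo_inverse(B, v, u_min=-0.5, u_max=0.5)
print("allocated deflections:", u, "  achieved moments:", B @ u)
```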

  6. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-square scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e. here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10-4 m2, corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10-5 m2), even when only three instantaneous pseudo-ranges are used, for example after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of important levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
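
    A minimal sketch of the underlying estimation problem, assuming hypothetical satellite positions and placeholder pseudo-ranges: the receiver coordinates and clock bias are found by minimizing the non-linear pseudo-range cost with SciPy's differential evolution, a stochastic optimizer in the same evolutionary family as the GA used above (not the specific GA of the paper).

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical ECEF satellite positions (m) and placeholder pseudo-ranges (m).
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
pranges = np.array([21110e3, 21120e3, 21130e3, 21140e3])

def cost(p):
    """Non-linear least-squares cost over receiver position and clock bias."""
    x, y, z, b = p   # b absorbs the receiver clock error, expressed in metres
    predicted = np.linalg.norm(sats - np.array([x, y, z]), axis=1) + b
    return np.sum((predicted - pranges) ** 2)

# Geocentric search box for X, Y, Z plus a clock-bias range (all illustrative).
bounds = [(-7e6, 7e6)] * 3 + [(-1e5, 1e5)]
result = differential_evolution(cost, bounds, seed=1)
print("estimated XYZ and clock bias:", result.x, "  cost:", result.fun)
```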

  7. Exploring Strange Nonchaotic Attractors through Jacobian Elliptic Functions

    ERIC Educational Resources Information Center

    Garcia-Hoz, A. Martinez; Chacon, R.

    2011-01-01

    We demonstrate the effectiveness of Jacobian elliptic functions (JEFs) for inquiring into the reshaping effect of quasiperiodic forces in nonlinear nonautonomous systems exhibiting strange nonchaotic attractors (SNAs). Specifically, we characterize analytically and numerically some reshaping-induced transitions starting from SNAs in the context of…

  8. A generalized chemistry version of SPARK

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.

    1988-01-01

    An extension of the reacting H2-air computer code SPARK is presented which enables the code to be used on any reacting flow problem. Routines are developed that calculate, in a general fashion, the reaction rates and chemical Jacobians of any reacting system. In addition, an equilibrium routine is added so that the code has frozen, finite-rate, and equilibrium capabilities. The reaction rate for each species is determined from the law of mass action using Arrhenius expressions for the rate constants. The Jacobian routines are obtained by numerically or analytically differentiating the law of mass action for each species. The equilibrium routine is based on a Gibbs free energy minimization routine. The routines are written in FORTRAN 77, with special consideration given to vectorization. Run times for the generalized routines are generally 20 percent longer than for reaction-specific routines. The numerical efficiency of the generalized analytical Jacobian, however, is nearly 300 percent better than that of the reaction-specific numerical Jacobian used in SPARK.
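
    A minimal sketch of the generalized approach for a single hypothetical elementary reaction: the rate follows the law of mass action with an Arrhenius rate constant, and the analytic Jacobian entries dr/dC_j = nu_j r / C_j are checked against a numerical finite-difference derivative, mirroring the analytic-versus-numerical Jacobian comparison discussed above. Species, coefficients, and rate parameters are invented for illustration.

```python
import numpy as np

R_UNIV = 8.314  # J/(mol K)

def arrhenius(A, b, Ea, T):
    """Modified Arrhenius rate constant k(T) = A * T**b * exp(-Ea / (R T))."""
    return A * T**b * np.exp(-Ea / (R_UNIV * T))

def reaction_rate_and_jacobian(k, conc, nu_react):
    """Law-of-mass-action rate r = k * prod(C_i**nu_i) of one elementary
    reaction and its analytic derivatives dr/dC_j = nu_j * r / C_j."""
    r = k * np.prod(conc ** nu_react)
    drdC = np.where(nu_react > 0, nu_react * r / conc, 0.0)
    return r, drdC

# Hypothetical bimolecular step A + B -> products.
conc = np.array([2.0, 0.5, 1.0])        # mol/m^3 for species A, B, C
nu_react = np.array([1.0, 1.0, 0.0])    # reactant stoichiometric coefficients
k = arrhenius(A=1.0e6, b=0.0, Ea=5.0e4, T=1500.0)
r, drdC = reaction_rate_and_jacobian(k, conc, nu_react)

# Finite-difference check of one Jacobian entry.
h = 1e-6
r_h, _ = reaction_rate_and_jacobian(k, conc + np.array([h, 0.0, 0.0]), nu_react)
print("analytic dr/dC_A:", drdC[0], "  numerical:", (r_h - r) / h)
```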

  9. Methods for calculating the electrode position Jacobian for impedance imaging.

    PubMed

    Boyle, A; Crabb, M G; Jehl, M; Lionheart, W R B; Adler, A

    2017-03-01

    Electrical impedance tomography (EIT) and electrical resistivity tomography (ERT) apply currents and measure voltages at the boundary of a domain through electrodes. The movement or incorrect placement of electrodes may lead to modelling errors that result in significant reconstructed image artifacts. These errors may be accounted for by allowing for electrode position estimates in the model. Movement may be reconstructed through a first-order approximation, the electrode position Jacobian. A reconstruction that incorporates electrode position estimates and conductivity can significantly reduce image artifacts. Conversely, if electrode position is ignored, it can be difficult to distinguish true conductivity changes from reconstruction artifacts, which may increase the risk of a flawed interpretation. In this work, we aim to determine the fastest, most accurate approach for estimating the electrode position Jacobian. Four methods of calculating the electrode position Jacobian were evaluated on a homogeneous halfspace. Results show that Fréchet derivative and rank-one update methods are competitive in computational efficiency but achieve different solutions for certain values of contact impedance and mesh density.
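
    For illustration only, the sketch below estimates an electrode position Jacobian by simple perturbation of a toy analytic forward model (point electrodes on a homogeneous halfspace); it is not one of the optimized formulations compared in the paper, but it shows what the Jacobian of measured voltages with respect to electrode coordinates looks like. All geometry and material values are assumed.

```python
import numpy as np

RHO, CURRENT = 100.0, 1.0   # halfspace resistivity (ohm m) and drive current (A)

def forward(electrode_x):
    """Toy forward model: surface potentials on a homogeneous halfspace for
    point electrodes on a line, V = rho*I/(2*pi*r). Current is driven between
    electrodes 0 and 1; voltages are 'measured' at the remaining electrodes."""
    xc_pos, xc_neg = electrode_x[0], electrode_x[1]
    xm = electrode_x[2:]
    return RHO * CURRENT / (2.0 * np.pi) * (1.0 / np.abs(xm - xc_pos)
                                            - 1.0 / np.abs(xm - xc_neg))

def electrode_position_jacobian(electrode_x, h=1e-6):
    """Perturbation (finite-difference) estimate of dV/d(electrode position)."""
    v0 = forward(electrode_x)
    J = np.zeros((v0.size, electrode_x.size))
    for j in range(electrode_x.size):
        xp = electrode_x.copy()
        xp[j] += h
        J[:, j] = (forward(xp) - v0) / h
    return J

x = np.array([0.0, 10.0, 2.0, 4.0, 6.0, 8.0])   # electrode x-coordinates (m)
print(electrode_position_jacobian(x))
```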

  10. A formula for the entropy of the convolution of Gibbs probabilities on the circle

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.

    2018-07-01

    Consider the transformation , such that (mod 1), and where S1 is the unit circle. Suppose is Hölder continuous and positive, and moreover that, for any , we have that We say that ρ is a Gibbs probability for the Hölder continuous potential , if where is the Ruelle operator for . We call J the Jacobian of ρ. Suppose is the convolution of two Gibbs probabilities and associated, respectively, to and . We show that ν is also Gibbs and its Jacobian is given by . In this case, the entropy is given by the expression For a fixed we consider differentiable variations , , of on the Banach manifold of Gibbs probabilities, where , and we estimate the derivative of the entropy at t  =  0. We also present an expression for the Jacobian of the convolution of a Gibbs probability ρ with the invariant probability supported on a periodic orbit of period two. This expression is based on the Jacobian of ρ and two Radon–Nikodym derivatives.

  11. Recent Developments Related To An Optically Controlled Microwave Phased Array Antenna.

    NASA Astrophysics Data System (ADS)

    Kittel, A.; Peinke, J.; Klein, M.; Baier, G.; Parisi, J.; Rössler, O. E.

    1990-12-01

    A generic 3-dimensional diffeomorphic map, with constant Jacobian determinant, is proposed and looked at numerically. It contains a lower-dimensional basin boundary along which a chaotic motion takes place. This boundary is nowhere differentiable in one direction. Therefore, nowhere differentiable limit sets exist generically in nature.

  12. Implicit and Multigrid Method for Ideal Multigrid Convergence: Direct Numerical Simulation of Separated Flow Around NACA 0012 Airfoil

    NASA Technical Reports Server (NTRS)

    Liu, Chao-Qun; Shan, H.; Jiang, L.

    1999-01-01

    Numerical investigation of flow separation over a NACA 0012 airfoil at large angles of attack has been carried out. The numerical calculation is performed by solving the full Navier-Stokes equations in generalized curvilinear coordinates. The second-order LU-SGS implicit scheme is applied for time integration. This scheme requires no tridiagonal inversion and is capable of being completely vectorized, provided the corresponding Jacobian matrices are properly selected. A fourth-order centered compact scheme is used for spatial derivatives. In order to reduce numerical oscillation, a sixth-order implicit filter is employed. Non-reflecting boundary conditions are imposed at the far-field and outlet boundaries to avoid possible non-physical wave reflection. Complex flow separation and vortex shedding phenomena have been observed and discussed.

  13. Numerical modeling of axi-symmetrical cold forging process by "Pseudo Inverse Approach"

    NASA Astrophysics Data System (ADS)

    Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.

    2011-05-01

    The incremental approach is widely used for forging process modeling; it gives good strain and stress estimates, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach makes maximum use of the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation because the loading history is neglected. A new approach called "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling, which keeps the IA's advantages but gives good stress estimation by taking the loading history into consideration. Our aim in this paper is to adapt the PIA to cold forging modeling. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations; and an incremental algorithm for the plastic integration is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.

  14. A new technique for the characterization of chaff elements

    NASA Astrophysics Data System (ADS)

    Scholfield, David; Myat, Maung; Dauby, Jason; Fesler, Jonathon; Bright, Jonathan

    2011-07-01

    A new technique for the experimental characterization of electromagnetic chaff based on Inverse Synthetic Aperture Radar is presented. This technique allows as few as one filament of chaff to be characterized in a controlled anechoic environment, providing stability and repeatability of experimental results. It enables a deeper understanding of the fundamental phenomena of electromagnetic scattering from chaff through incremental analysis: chaff analysis can now begin with a single element and progress through the build-up of particles into pseudo-cloud structures. This controlled incremental approach is supported by an identical incremental modeling and validation process. Additionally, this technique has the potential to produce considerable savings in financial and schedule cost and provides a stable and repeatable experiment to aid model validation.

  15. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
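
    A small sketch of the preferred serial strategy under simplifying assumptions: GMRES preconditioned with an incomplete LU factorization, applied to a hypothetical sparse, non-symmetric stencil standing in for a reacting-flow Newton-step Jacobian (a scalar ILU is used here in place of the block-ILU mentioned above).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical sparse, non-symmetric system (a convection-diffusion-like stencil).
n = 50
N = n * n
main = 4.0 * np.ones(N)
lower = -1.3 * np.ones(N - 1)     # convection-like asymmetry
upper = -0.7 * np.ones(N - 1)
far = -1.0 * np.ones(N - n)
A = sp.diags([main, lower, upper, far, far], [0, -1, 1, -n, n], format="csc")
b = np.ones(N)

# Incomplete-LU factorization applied as a preconditioner via a LinearOperator.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("GMRES info:", info, "  residual norm:", np.linalg.norm(b - A @ x))
```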

  16. Inverse Thermal Analysis of Ti-6Al-4V Friction Stir Welds Using Numerical-Analytical Basis Functions with Pseudo-Advection

    NASA Astrophysics Data System (ADS)

    Lambrakos, S. G.

    2018-04-01

    Inverse thermal analysis of Ti-6Al-4V friction stir welds is presented that demonstrates application of a methodology using numerical-analytical basis functions and temperature-field constraint conditions. This analysis provides parametric representation of friction-stir-weld temperature histories that can be adopted as input data to computational procedures for prediction of solid-state phase transformations and mechanical response. These parameterized temperature histories can be used for inverse thermal analysis of friction stir welds having process conditions similar to those considered here. Case studies are presented for inverse thermal analysis of friction stir welds that use three-dimensional constraint conditions on calculated temperature fields, which are associated with experimentally measured transformation boundaries and weld-stir-zone cross sections.

  17. Evaluation of Jacobian determinants by Monte Carlo methods: Application to the quasiclassical approximation in molecular scattering

    NASA Technical Reports Server (NTRS)

    Labudde, R. A.

    1971-01-01

    A technique is described which can be used to evaluate Jacobian determinants which occur in classical mechanical and quasiclassical approximation descriptions of molecular scattering. The method may be valuable in the study of reactive scattering using the quasiclassical approximation.

  18. Numerical implementation, verification and validation of two-phase flow four-equation drift flux model with Jacobian-free Newton–Krylov method

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-08-24

    This study presents a numerical investigation of using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on Ishii and his collaborators' work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered-grid finite volume method and the fully implicit backward Euler method were used for the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
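
    A generic Jacobian-free Newton-Krylov sketch, using SciPy's newton_krylov on a simple nonlinear diffusion-reaction residual that merely stands in for the drift-flux equations: only residual evaluations are supplied, so no Jacobian is ever assembled.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Residual of -u'' + u**3 = 1 on a uniform grid with u = 0 at both ends;
    a simple stand-in for a discretized two-phase flow residual. The Krylov
    solver only ever calls this function, so the Jacobian is never formed."""
    n = u.size
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0

u0 = np.zeros(100)
u = newton_krylov(residual, u0, method="gmres", f_tol=1e-8)
print("max |residual| at solution:", np.abs(residual(u)).max())
```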

  19. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced, and a systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  20. Nested case-control studies: should one break the matching?

    PubMed

    Borgan, Ørnulf; Keogh, Ruth

    2015-10-01

    In a nested case-control study, controls are selected for each case from the individuals who are at risk at the time at which the case occurs. We say that the controls are matched on study time. To adjust for possible confounding, it is common to match on other variables as well. The standard analysis of nested case-control data is based on a partial likelihood which compares the covariates of each case to those of its matched controls. It has been suggested that one may break the matching of nested case-control data and analyse them as case-cohort data using an inverse probability weighted (IPW) pseudo likelihood. Further, when some covariates are available for all individuals in the cohort, multiple imputation (MI) makes it possible to use all available data in the cohort. In the paper we review the standard method and the IPW and MI approaches, and compare their performance using simulations that cover a range of scenarios, including one and two endpoints.

  1. One-way propagation of bulk states and robust edge states in photonic crystals with broken inversion and time-reversal symmetries

    NASA Astrophysics Data System (ADS)

    Lu, Jin-Cheng; Chen, Xiao-Dong; Deng, Wei-Min; Chen, Min; Dong, Jian-Wen

    2018-07-01

    The valley is a flexible degree of freedom for light manipulation in photonic systems. In this work, we introduce the valley concept in magnetic photonic crystals with broken inversion symmetry. One-way propagation of bulk states is demonstrated by exploiting the pseudo-gap where bulk states only exist at one single valley. In addition, the transition between Hall and valley-Hall nontrivial topological phases is also studied in terms of the competition between the broken inversion and time-reversal symmetries. At the photonic boundary between two topologically distinct photonic crystals, we illustrate the one-way propagation of edge states and demonstrate their robustness against defects.

  2. Propagation of singularities for linearised hybrid data impedance tomography

    NASA Astrophysics Data System (ADS)

    Bal, Guillaume; Hoffmann, Kristoffer; Knudsen, Kim

    2018-02-01

    For a general formulation of linearised hybrid inverse problems in impedance tomography, the qualitative properties of the solutions are analysed. Using an appropriate scalar pseudo-differential formulation, the problems are shown to permit propagating singularities under certain non-elliptic conditions, and the associated directions of propagation are precisely identified relative to the directions in which ellipticity is lost. The same result is found in the setting for the corresponding normal formulation of the scalar pseudo-differential equations. A numerical reconstruction procedure based on the least squares finite element method is derived, and a series of numerical experiments visualise exactly how the loss of ellipticity manifests itself as propagating singularities.

  3. On recovering distributed IP information from inductive source time domain electromagnetic data

    NASA Astrophysics Data System (ADS)

    Kang, Seogi; Oldenburg, Douglas W.

    2016-10-01

    We develop a procedure to invert time domain induced polarization (IP) data for inductive sources. Our approach is based upon the inversion methodology in conventional electrical IP (EIP), which uses a sensitivity function that is independent of time. However, significant modifications are required for inductive source IP (ISIP) because electric fields in the ground do not achieve a steady state. The time-history for these fields needs to be evaluated and then used to define approximate IP currents. The resultant data, either a magnetic field or its derivative, are evaluated through the Biot-Savart law. This forms the desired linear relationship between data and pseudo-chargeability. Our inversion procedure has three steps: (1) Obtain a 3-D background conductivity model. We advocate, where possible, that this be obtained by inverting early-time data that do not suffer significantly from IP effects. (2) Decouple IP responses embedded in the observations by forward modelling the TEM data due to a background conductivity and subtracting these from the observations. (3) Use the linearized sensitivity function to invert data at each time channel and recover pseudo-chargeability. Post-interpretation of the recovered pseudo-chargeabilities at multiple times allows recovery of intrinsic Cole-Cole parameters such as time constant and chargeability. The procedure is applicable to all inductive source survey geometries but we focus upon airborne time domain EM (ATEM) data with a coincident-loop configuration because of the distinctive negative IP signal that is observed over a chargeable body. Several assumptions are adopted to generate our linearized modelling but we systematically test the capability and accuracy of the linearization for ISIP responses arising from different conductivity structures. On test examples we show: (1) our decoupling procedure enhances the ability to extract information about existence and location of chargeable targets directly from the data maps; (2) the horizontal location of a target body can be well recovered through inversion; (3) the overall geometry of a target body might be recovered but for ATEM data a depth weighting is required in the inversion; (4) we can recover estimates of intrinsic τ and η that may be useful for distinguishing between two chargeable targets.

  4. Local performance optimization for a class of redundant eight-degree-of-freedom manipulators

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1994-01-01

    Local performance optimization for joint limit avoidance and manipulability maximization (singularity avoidance) is obtained by using the Jacobian matrix pseudoinverse and by projecting the gradient of an objective function into the Jacobian null space. Real-time redundancy optimization control is achieved for an eight-joint redundant manipulator having a three-axis spherical shoulder, a single elbow joint, and a four-axis spherical wrist. Symbolic solutions are used for both full-Jacobian and wrist-partitioned pseudoinverses, partitioned null-space projection matrices, and all objective function gradients. A kinematic limitation of this class of manipulators and the limitation's effect on redundancy resolution are discussed. Results obtained with graphical simulation are presented to demonstrate the effectiveness of local redundant manipulator performance optimization. Actual hardware experiments performed to verify the simulated results are also discussed. A major result is that the partitioned solution is desirable because of low computation requirements. The partitioned solution is suboptimal compared with the full solution because translational and rotational terms are optimized separately; however, the results show that the difference is not significant. Singularity analysis reveals that no algorithmic singularities exist for the partitioned solution. The partitioned and full solutions share the same physical manipulator singular conditions. When compared with the full solution, the partitioned solution is shown to be ill-conditioned in smaller neighborhoods of the shared singularities.
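
    The underlying resolution scheme can be sketched in a few lines: the joint rates are the pseudoinverse solution plus a secondary objective gradient projected into the Jacobian null space, so the end-effector command is unaffected. The Jacobian, velocity command, and joint-limit objective below are hypothetical placeholders, not the symbolic solutions developed in the paper.

```python
import numpy as np

def redundancy_resolution(J, v, grad_H, alpha=0.1):
    """Least-squares joint rates plus a null-space term:
    dq = J+ v + alpha * (I - J+ J) grad_H,
    where grad_H is the gradient of a secondary objective (e.g. joint-limit
    avoidance); the projection leaves the end-effector velocity unchanged."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ v + alpha * null_proj @ grad_H

def joint_limit_gradient(q, q_min, q_max):
    """Gradient of a simple objective that pushes joints toward mid-range."""
    q_mid = 0.5 * (q_min + q_max)
    return -(q - q_mid) / (q_max - q_min) ** 2

# Hypothetical 6x8 Jacobian of an 8-joint arm and a Cartesian velocity command.
rng = np.random.default_rng(2)
J = rng.standard_normal((6, 8))
v = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])
q = rng.uniform(-1.0, 1.0, 8)
q_min, q_max = -2.0 * np.ones(8), 2.0 * np.ones(8)

dq = redundancy_resolution(J, v, joint_limit_gradient(q, q_min, q_max))
print("end-effector velocity error:", np.linalg.norm(J @ dq - v))
```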

  5. Nonlinear wave choked inlets

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The quasi-one-dimensional flow program was modified in two ways. The Runge-Kutta subroutine was replaced with a subroutine that uses a modified divided-difference form of the Adams PECE method, and the matrix inversion routine was replaced with a pseudo-inverse routine. Calculations were run using both the original and modified programs. Comparison of the calculations showed that the original Runge-Kutta routine could not detect the singularity near the throat and was integrating across it. The modified version was able to detect the singularity and therefore gave more valid calculations.

  6. Method for controlling a vehicle with two or more independently steered wheels

    DOEpatents

    Reister, D.B.; Unseren, M.A.

    1995-03-28

    A method is described for independently controlling each steerable drive wheel of a vehicle with two or more such wheels. An instantaneous center of rotation target and a tangential velocity target are inputs to a wheel target system which sends the velocity target and a steering angle target for each drive wheel to a pseudo-velocity target system. The pseudo-velocity target system determines a pseudo-velocity target which is compared to a current pseudo-velocity to determine a pseudo-velocity error. The steering angle targets and the steering angles are inputs to a steering angle control system which outputs to the steering angle encoders, which measure the steering angles. The pseudo-velocity error, the rate of change of the pseudo-velocity error, and the wheel slip between each pair of drive wheels are used to calculate intermediate control variables which, along with the steering angle targets are used to calculate the torque to be applied at each wheel. The current distance traveled for each wheel is then calculated. The current wheel velocities and steering angle targets are used to calculate the cumulative and instantaneous wheel slip and the current pseudo-velocity. 6 figures.

  7. Design of Robust Adaptive Unbalance Response Controllers for Rotors with Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Knospe, Carl R.; Tamer, Samir M.; Fedigan, Stephen J.

    1996-01-01

    Experimental results have recently demonstrated that an adaptive open loop control strategy can be highly effective in the suppression of unbalance induced vibration on rotors supported in active magnetic bearings. This algorithm, however, relies upon a predetermined gain matrix. Typically, this matrix is determined by an optimal control formulation resulting in the choice of the pseudo-inverse of the nominal influence coefficient matrix as the gain matrix. This solution may result in problems with stability and performance robustness since the estimated influence coefficient matrix is not equal to the actual influence coefficient matrix. Recently, analysis tools have been developed to examine the robustness of this control algorithm with respect to structured uncertainty. Herein, these tools are extended to produce a design procedure for determining the adaptive law's gain matrix. The resulting control algorithm has a guaranteed convergence rate and steady state performance in spite of the uncertainty in the rotor system. Several examples are presented which demonstrate the effectiveness of this approach and its advantages over the standard optimal control formulation.
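
    A minimal sketch of the conventional adaptive open-loop update that the paper starts from (not the robust design it proposes): the control correction is the pseudo-inverse of an estimated influence coefficient matrix applied to the measured synchronous vibration, iterated until the vibration settles near its least-squares minimum. All matrices below are synthetic, and the 20% model error and gain are arbitrary assumptions.

```python
import numpy as np

def adaptive_unbalance_update(u, y, C_est, gain=0.5):
    """One step of a generic adaptive open-loop unbalance controller:
    u_next = u - gain * pinv(C_est) @ y, where C_est is the estimated
    influence-coefficient matrix mapping the control u to the measured
    synchronous vibration phasors y."""
    return u - gain * np.linalg.pinv(C_est) @ y

# Hypothetical true and estimated influence coefficients (complex phasors).
rng = np.random.default_rng(3)
C_true = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
C_est = C_true * (1.0 + 0.2 * rng.standard_normal((4, 2)))   # 20% model error
d = rng.standard_normal(4) + 1j * rng.standard_normal(4)      # unbalance response

u = np.zeros(2, dtype=complex)
for _ in range(20):
    y = d + C_true @ u          # measured vibration phasors at the sensors
    u = adaptive_unbalance_update(u, y, C_est)

# With more sensors than actuators the vibration is reduced in a
# least-squares sense, not driven exactly to zero.
print("initial vibration:", np.abs(d).max(),
      "  after adaptation:", np.abs(d + C_true @ u).max())
```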

  8. Finite element solution of optimal control problems with state-control inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1992-01-01

    It is demonstrated that the weak Hamiltonian finite-element formulation is amenable to the solution of optimal control problems with inequality constraints which are functions of both state and control variables. Difficult problems can be treated on account of the ease with which algebraic equations can be generated before having to specify the problem. These equations yield very accurate solutions. Owing to the sparse structure of the resulting Jacobian, computer solutions can be obtained quickly when the sparsity is exploited.

  9. Exploiting the MODIS albedos with the Two-stream Inversion Package (JRC-TIP): 2. Fractions of transmitted and absorbed fluxes in the vegetation and soil layers

    NASA Astrophysics Data System (ADS)

    Pinty, B.; Clerici, M.; Andredakis, I.; Kaminski, T.; Taberner, M.; Verstraete, M. M.; Gobron, N.; Plummer, S.; Widlowski, J.-L.

    2011-05-01

    The two-stream model parameters and associated uncertainties retrieved by inversion against MODIS broadband visible and near-infrared white sky surface albedos were discussed in a companion paper. The present paper concentrates on the partitioning of the solar radiation fluxes delivered by the Joint Research Centre Two-stream Inversion Package (JRC-TIP). The estimation of the various flux fractions related to the vegetation and the background layers separately capitalizes on the probability density functions of the model parameters discussed in the companion paper. The propagation of uncertainties from the observations to the model parameters is achieved via the Hessian of the cost function and yields a covariance matrix of posterior parameter uncertainties. This matrix is propagated to the radiation fluxes via the model's Jacobian matrix of first derivatives. Results exhibit a rather good spatiotemporal consistency given that the prior values on the model parameters are not specified as a function of land cover type and/or vegetation phenological states. A specific investigation based on a scenario imposing stringent conditions of leaf absorbing and scattering properties highlights the impact of such constraints that are, as a matter of fact, currently adopted in vegetation index approaches. Special attention is also given to snow-covered and snow-contaminated areas since these regions encompass significant reflectance changes that strongly affect land surface processes. A definite asset of the JRC-TIP lies in its capability to control and ultimately relax a number of assumptions that are often implicit in traditional approaches. These features greatly help us understand the discrepancies between the different data sets of land surface properties and fluxes that are currently available. Through a series of selected examples, the inverse procedure implemented in the JRC-TIP is shown to be robust, reliable, and compliant with large-scale processing requirements. Furthermore, this package ensures the physical consistency between the set of observations, the two-stream model parameters, and radiation fluxes. It also documents the retrieval of associated uncertainties.
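
    The uncertainty propagation step described above reduces, to first order, to sandwiching the posterior parameter covariance between the flux Jacobian and its transpose; a minimal sketch with hypothetical numbers:

```python
import numpy as np

def propagate_covariance(J, C_param):
    """First-order propagation of parameter uncertainty to derived fluxes:
    C_flux = J @ C_param @ J.T, with J the Jacobian d(flux)/d(parameter)."""
    return J @ C_param @ J.T

# Hypothetical 3-flux / 4-parameter Jacobian and posterior parameter covariance.
J = np.array([[ 0.8, -0.1, 0.0,  0.3],
              [ 0.1,  0.6, 0.2, -0.2],
              [-0.4,  0.0, 0.9,  0.1]])
C_param = np.diag([0.02, 0.05, 0.01, 0.03])
C_flux = propagate_covariance(J, C_param)
print("1-sigma flux uncertainties:", np.sqrt(np.diag(C_flux)))
```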

  10. Flux Jacobian matrices and generalized Roe average for an equilibrium real gas

    NASA Technical Reports Server (NTRS)

    Vinokur, Marcel

    1988-01-01

    Inviscid flux Jacobian matrices and their properties used in numerical solutions of conservation laws are extended to general, equilibrium gas laws. Exact and approximate generalizations of the Roe average are presented. Results are given for one-dimensional flow, and then extended to three-dimensional flow with time-varying grids.

  11. Integrated seismic stochastic inversion and multi-attributes to delineate reservoir distribution: Case study MZ fields, Central Sumatra Basin

    NASA Astrophysics Data System (ADS)

    Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.

    2017-07-01

    This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, in the formation within the depth interval between the Top Sihapas and Top Pematang. The method used is a stochastic inversion integrated with multi-attribute seismic analysis applying a Probabilistic Neural Network (PNN). Stochastic methods are used to predict the sandstone probability map from the impedance, varied over 50 realizations to produce a good probability estimate. Stochastic seismic inversion is more interpretive because it directly gives the value of the property. Our experiment shows that the acoustic impedance (AI) from stochastic inversion provides a more diverse uncertainty, so that the probability values are close to the actual values. The resulting AI is then used as input to a multi-attribute analysis, which is used to predict the gamma ray, density and porosity logs. To select the number of attributes used, a stepwise regression algorithm is applied. The resulting attributes are used in the PNN process. The PNN method is chosen because it has the best correlation among the neural network methods considered. Finally, we interpret the products of the multi-attribute analysis, in the form of a pseudo-gamma-ray volume, a density volume and a pseudo-porosity volume, to delineate the reservoir distribution. Our interpretation shows that the structural trap is identified in the southeastern part of the study area, along the anticline.

  12. Visualization of x-ray computer tomography using computer-generated holography

    NASA Astrophysics Data System (ADS)

    Daibo, Masahiro; Tayama, Norio

    1998-09-01

    A theory for converting x-ray projection data directly into a hologram, obtained by combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to offer the theory for realizing an all-electronic, high-speed, see-through 3D visualization system for application to medical diagnosis and non-destructive testing. First, the CT is expressed using the pseudo-inverse matrix, which is obtained by singular value decomposition. The CGH is expressed in matrix form. Next, the 'projection to hologram conversion' (PTHC) matrix is calculated by multiplying the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted directly to the hologram vector by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of calculation is drastically reduced. Using this direct conversion technique, we demonstrate a CT cross section reconstructed in 3D space by a He-Ne laser from real x-ray projection data acquired with x-ray television equipment.
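
    The direct conversion can be sketched in a few lines, assuming stand-in matrices for the CT projection operator and the CGH phase matrix: the PTHC matrix is the product of the CGH matrix with the pseudo-inverse of the CT matrix, and it maps projection data straight to hologram samples. The sizes and random matrices below are purely illustrative.

```python
import numpy as np

# Hypothetical small system: A maps an N-pixel slice to M projection samples,
# H maps the same slice to K hologram samples (a stand-in CGH phase matrix).
rng = np.random.default_rng(4)
N, M, K = 64, 96, 128
A = rng.standard_normal((M, N))                     # CT projection matrix
H = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (K, N)))

# "Projection to hologram conversion" matrix: hologram = H @ pinv(A) @ p.
PTHC = H @ np.linalg.pinv(A)

image = rng.random(N)            # unknown slice, used only to simulate data
projections = A @ image          # simulated x-ray projection data
hologram = PTHC @ projections    # direct conversion, no explicit reconstruction
print("first hologram samples:", hologram[:4])
```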

  13. Kinematics and control of redundant robotic arm based on dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Branz, Francesco; Antonello, Andrea; Carron, Andrea; Carli, Ruggero; Francesconi, Alessandro

    2015-04-01

    Soft robotics is a promising field and its application to space mechanisms could represent a breakthrough in space technologies by enabling new operative scenarios (e.g. soft manipulators, capture systems). Dielectric Elastomers Actuators have been under deep study for a number of years and have shown several advantages that could be of key importance for space applications. Among such advantages the most notable are high conversion efficiency, distributed actuation, self-sensing capability, multi-degree-of-freedom design, light weight and low cost. The big potentialities of double cone actuators have been proven in terms of good performances (i.e. stroke and force/torque), ease of manufacturing and durability. In this work the kinematic, dynamic and control design of a two-joint redundant robotic arm is presented. Two double cone actuators are assembled in series to form a two-link design. Each joint has two degrees of freedom (one rotational and one translational) for a total of four. The arm is designed to move in a 2-D environment (i.e. the horizontal plane) with 4 DoF, consequently having two degrees of redundancy. The redundancy is exploited in order to minimize the joint loads. The kinematic design with redundant Jacobian inversion is presented. The selected control algorithm is described along with the results of a number of dynamic simulations that have been executed for performance verification. Finally, an experimental setup is presented based on a flexible structure that counteracts gravity during testing in order to better emulate future zero-gravity applications.

  14. Surface and Atmospheric Contributions to Passive Microwave Brightness Temperatures for Falling Snow Events

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail; Johnson, Benjamin T.

    2011-01-01

    Physically based passive microwave precipitation retrieval algorithms require a set of relationships between satellite-observed brightness temperatures (TBs) and the physical state of the underlying atmosphere and surface. These relationships are nonlinear, such that inversions are ill-posed, especially over variable land surfaces. In order to elucidate these relationships, this work presents a theoretical analysis using TB weighting functions to quantify the percentage influence on the TB resulting from absorption, emission, and/or reflection from the surface, as well as from frozen hydrometeors in clouds, from atmospheric water vapor, and from other contributors. The percentage analysis was also compared to Jacobians. The results are presented for frequencies from 10 to 874 GHz, for individual snow profiles, and for averages over three cloud-resolving model simulations of falling snow. The bulk structure (e.g., ice water path and cloud depth) of the underlying cloud scene was found to affect the resultant TB and percentages, producing different values for blizzard, lake-effect, and synoptic snow events. The slant path at a 53° viewing angle increases the hydrometeor contributions relative to nadir-viewing channels. Jacobians provide the magnitude and direction of change in the TB values due to a change in the underlying scene; however, the percentage analysis provides detailed information on how that change affected contributions to the TB from the surface, hydrometeors, and water vapor. The TB percentage information presented in this paper provides information about the relative contributions to the TB and supplies key pieces of information required to develop and improve precipitation retrievals over land surfaces.

  15. Retrievals of methane from IASI radiance spectra and comparisons with ground-based FTIR measurements

    NASA Astrophysics Data System (ADS)

    Kerzenmacher, T.; Kumps, N.; de Mazière, M.; Kruglanski, M.; Senten, C.; Vanhaelewyn, G.; Vandaele, A. C.; Vigouroux, C.

    2009-04-01

    The Infrared Atmospheric Sounding Interferometer (IASI), launched on 19 October 2006, is a Fourier transform spectrometer onboard METOP-1, observing the radiance of the Earth's surface and atmosphere in nadir mode. The spectral range covers the 645 to 2760 cm-1 region with a resolution of 0.35 to 0.5 cm-1. A line-by-line spectral simulation and inversion code, ASIMUT, has been developed for the retrieval of chemical species from infrared spectra. The code includes an analytical calculation of the Jacobians for use in the inversion part of the algorithm based on the Optimal Estimation Method. In 2007 we conducted a measurement campaign at St Denis, Île de la Réunion, where we performed ground-based solar absorption observations with an infrared Fourier transform spectrometer. ASIMUT has been used to retrieve methane from the ground-based and collocated satellite measurements. For the latter we selected pixels that are situated over the sea. In this presentation we will show the retrieval strategies, the resulting methane column time series above St Denis and the comparisons of the satellite data with the ground-based data sets. Vertical profile information in these data sets will also be discussed.
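
    A minimal sketch of the linear Optimal Estimation update used in such retrievals, with a hypothetical Jacobian, prior, and noise covariance standing in for the ASIMUT quantities:

```python
import numpy as np

def oem_linear_update(y, Fxa, K, xa, Sa, Se):
    """One linear Optimal Estimation step:
    x = xa + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(xa)),
    where K is the forward-model Jacobian evaluated at the prior xa."""
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
    return xa + S_hat @ K.T @ Se_inv @ (y - Fxa), S_hat

# Hypothetical 3-layer methane retrieval from 5 radiance channels.
rng = np.random.default_rng(5)
K = 0.1 * rng.standard_normal((5, 3))          # radiance Jacobian
xa = np.array([1.8, 1.8, 1.8])                 # prior CH4 (ppm) per layer
Sa = 0.04 * np.eye(3)                          # prior covariance
Se = 0.01 * np.eye(5)                          # radiance noise covariance
x_true = np.array([1.9, 1.85, 1.75])
y = K @ x_true + 0.01 * rng.standard_normal(5) # simulated linear measurement
Fxa = K @ xa

x_hat, S_hat = oem_linear_update(y, Fxa, K, xa, Sa, Se)
print("retrieved profile:", x_hat, "  1-sigma:", np.sqrt(np.diag(S_hat)))
```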

  16. On linearization and preconditioning for radiation diffusion coupled to material thermal conduction equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Tao, E-mail: fengtao2@mail.ustc.edu.cn; Graduate School of China Academy Engineering Physics, Beijing 100083; An, Hengbin, E-mail: an_hengbin@iapcm.ac.cn

    2013-03-01

    Jacobian-free Newton–Krylov (JFNK) method is an effective algorithm for solving large scale nonlinear equations. One of the most important advantages of the JFNK method is that there is no necessity to form and store the Jacobian matrix of the nonlinear system when the JFNK method is employed. However, an approximation of the Jacobian is needed for the purpose of preconditioning. In this paper, the JFNK method is employed to solve a class of non-equilibrium radiation diffusion coupled to material thermal conduction equations, and two preconditioners are designed by linearizing the equations in two ways. Numerical results show that the two preconditioning methods can improve the convergence behavior and efficiency of the JFNK method.

  17. Apparently noninvariant terms of nonlinear sigma models in lattice perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harada, Koji; Hattori, Nozomu; Kubo, Hirofumi

    2009-03-15

    Apparently noninvariant terms (ANTs) that appear in loop diagrams for nonlinear sigma models are revisited in lattice perturbation theory. The calculations have been done mostly with dimensional regularization so far. In order to establish that the existence of ANTs is independent of the regularization scheme, and of the potential ambiguities in the definition of the Jacobian of the change of integration variables from group elements to 'pion' fields, we employ lattice regularization, in which everything (including the Jacobian) is well defined. We show explicitly that lattice perturbation theory produces ANTs in the four-point functions of the pion fields at one loop and that the Jacobian does not play an important role in generating ANTs.

  18. A model of precambrian geology of Kansas derived from gravity and magnetic data

    NASA Astrophysics Data System (ADS)

    Xia, Jianghai; Sprowl, Donald R.; Steeples, Don W.

    1996-10-01

    The fabric of the Precambrian geology of Kansas is revealed through inversion of gravity and magnetic data to pseudo-lithology. There are five main steps in the inversion process: (1) reduction of potential-field data to a horizontal plane in the wavenumber domain; (2) separation of the residual anomaly of interest from the regional background, where an assumption is made that the regional anomaly could be represented by some order of polynomial; (3) subtraction of the signal due to the known topography on the Phanerozoic/Precambrian boundary from the residual anomaly (we assume what is left at this stage are the signals due to lateral variation in the Precambrian lithology); (4) inversion of the residual anomaly in the wavenumber domain to density and magnetization distribution in the top part of the Precambrian constrained by the known geologic information; (5) derivation of pseudo-lithology by characterization of density and magnetization. The boundary between the older Central Plains Province to the north and the Southern Granite-Rhyolite Province to the south is clearly delineated. The Midcontinent Rift System appears to widen in central Kansas and involve a considerable portion of southern Kansas. Lithologies in southwestern Kansas appear to change over fairly small areas and include mafic rocks which have not been encountered in drill holes. The texture of the potential field data from southwestern Kansas suggests a history of continental growth by broad extension.

  19. Representation of the inverse of a frame multiplier.

    PubMed

    Balazs, P; Stoeva, D T

    2015-02-15

    Certain mathematical objects appear in a lot of scientific disciplines, like physics, signal processing and, naturally, mathematics. In a general setting they can be described as frame multipliers, consisting of analysis, multiplication by a fixed sequence (called the symbol), and synthesis. In this paper we show a surprising result about the inverse of such operators, if any, as well as new results about a core concept of frame theory, dual frames. We show that for semi-normalized symbols, the inverse of any invertible frame multiplier can always be represented as a frame multiplier with the reciprocal symbol and dual frames of the given ones. Furthermore, one of those dual frames is uniquely determined and the other one can be arbitrarily chosen. We investigate sufficient conditions for the special case, when both dual frames can be chosen to be the canonical duals. In connection to the above, we show that the set of dual frames determines a frame uniquely. Furthermore, for a given frame, the union of all coefficients of its dual frames is dense in ℓ2. We also introduce a class of frames (called pseudo-coherent frames), which includes Gabor frames and coherent frames, and investigate invertible pseudo-coherent frame multipliers, allowing a classification for frame-type operators for these frames. Finally, we give a numerical example for the invertibility of multipliers in the Gabor case.

  1. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  2. Investigation of the Capability of Compact Polarimetric SAR Interferometry to Estimate Forest Height

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Xie, Lei; Wang, Chao; Chen, Jiehong

    2013-08-01

    The main objective of this paper is to investigate the capability of compact polarimetric SAR interferometry (C-PolInSAR) for forest height estimation. To this end, the pseudo fully polarimetric interferometric (F-PolInSAR) covariance matrix is first reconstructed; then the three-stage inversion algorithm, a hybrid algorithm, and the MUSIC and Capon algorithms are applied to both the C-PolInSAR covariance matrix and the pseudo F-PolInSAR covariance matrix. The feasibility of forest height estimation is demonstrated using L-band data generated by the simulator PolSARProSim and X-band airborne data acquired by the East China Research Institute of Electronic Engineering, China Electronics Technology Group Corporation.

  3. Multivariate statistics of the Jacobian matrices in tensor based morphometry and their application to HIV/AIDS.

    PubMed

    Lepore, Natasha; Brun, Caroline A; Chiang, Ming-Chang; Chou, Yi-Yu; Dutton, Rebecca A; Hayashi, Kiralee M; Lopez, Oscar L; Aizenstein, Howard J; Toga, Arthur W; Becker, James T; Thompson, Paul M

    2006-01-01

    Tensor-based morphometry (TBM) is widely used in computational anatomy as a means to understand shape variation between structural brain images. A 3D nonlinear registration technique is typically used to align all brain images to a common neuroanatomical template, and the deformation fields are analyzed statistically to identify group differences in anatomy. However, the differences are usually computed solely from the determinants of the Jacobian matrices that are associated with the deformation fields computed by the registration procedure. Thus, much of the information contained within those matrices gets thrown out in the process. Only the magnitude of the expansions or contractions is examined, while the anisotropy and directional components of the changes are ignored. Here we remedy this problem by computing multivariate shape change statistics using the strain matrices. As the latter do not form a vector space, means and covariances are computed on the manifold of positive-definite matrices to which they belong. We study the brain morphology of 26 HIV/AIDS patients and 14 matched healthy control subjects using our method. The images are registered using a high-dimensional 3D fluid registration algorithm, which optimizes the Jensen-Rényi divergence, an information-theoretic measure of image correspondence. The anisotropy of the deformation is then computed. We apply a manifold version of Hotelling's T2 test to the strain matrices. Our results complement those found from the determinants of the Jacobians alone and provide greater power in detecting group differences in brain structure.

  4. Polarity control at interfaces: Quantifying pseudo-solvent effects in nano-confined systems

    DOE PAGES

    Singappuli-Arachchige, Dilini; Manzano, J. Sebastian; Sherman, Lindy M.; ...

    2016-08-02

    Surface functionalization controls local environments and induces solvent-like effects at liquid–solid interfaces. We explored structure–property relationships between organic groups bound to pore surfaces of mesoporous silica nanoparticles and Stokes shifts of the adsorbed solvatochromic dye Prodan. Correlating shifts of the dye on the surfaces with its shifts in solvents resulted in a local polarity scale for functionalized pores. The scale was validated by studying the effects of pore polarity on quenching of Nile Red fluorescence and on the vibronic band structure of pyrene. Measurements were done in aqueous suspensions of porous particles, proving that the dielectric properties in the pores are different from the bulk solvent. The precise control of pore polarity was used to enhance the catalytic activity of TEMPO in the aerobic oxidation of furfuryl alcohol in water. Furthermore, an inverse relationship was found between pore polarity and activity of TEMPO in the pores, demonstrating that controlling the local polarity around an active site allows modulating the activity of nanoconfined catalysts.

  5. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. In this paper, flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight-line difference data instead of directly to the original AEM data. Pseudo tie lines are selected distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. The levelling errors of the original AEM data are then obtained through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. The effectiveness of the method is demonstrated by levelling results on survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
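
    A rough schematic of the processing steps just described, under simplifying assumptions (hypothetical array shapes and noise, a rank-1 error model, and no spatial interpolation or tie-line selection away from anomalies):

    ```python
    import numpy as np

    def low_rank_reconstruction(X, k=1):
        """Reconstruct X from its first k principal components (via SVD); used
        here as the estimate of the spatially coherent levelling error."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]

    # Hypothetical survey: 40 flight lines (rows), 200 samples per line (columns).
    rng = np.random.default_rng(0)
    signal = rng.normal(scale=0.05, size=(40, 200))           # stand-in for geology
    level_err = np.linspace(0.0, 5.0, 40)[:, None]            # line-to-line drift
    data = signal + level_err

    # 1) difference adjacent flight lines to enhance the levelling error
    diff = np.diff(data, axis=0)                              # shape (39, 200)
    # 2) each sample index acts as a pseudo tie line crossing the flight lines;
    #    the low-order principal-component reconstruction estimates the error
    err_in_diff = low_rank_reconstruction(diff, k=1)
    # 3) inverse differencing (cumulative sum) recovers each line's error,
    #    up to the unknown level of the first line
    est_err = np.vstack([np.zeros((1, data.shape[1])),
                         np.cumsum(err_in_diff, axis=0)])
    levelled = data - est_err
    print("residual line-to-line drift:", np.ptp(levelled.mean(axis=1)))
    ```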

  6. Sea of Majorana fermions from pseudo-scalar superconducting order in three dimensional Dirac materials.

    PubMed

    Salehi, Morteza; Jafari, S A

    2017-08-15

    We suggest that spin-singlet pseudo-scalar s-wave superconducting pairing creates a two-dimensional sea of Majorana fermions on the surface of three-dimensional Dirac superconductors (3DDS). This pseudo-scalar superconducting order parameter Δ5, in competition with the scalar Dirac mass m, leads to a topological phase transition due to band inversion. We find that a perfect Andreev-Klein reflection is guaranteed by the presence of anomalous Andreev reflection along with the conventional one. This effect manifests itself in a resonant peak of the differential conductance. Furthermore, the Josephson current of the Δ5|m|Δ5 junction in the presence of anomalous Andreev reflection is fractional, with a 4π period. Our finding suggests another search area for the condensed matter realization of Majorana fermions, beyond the vortex core of p-wave superconductors. The required Δ5 pairing can be extrinsically induced by a conventional s-wave superconductor into a three-dimensional Dirac material (3DDM).

  7. Differential Kinematics Of Contemporary Industrial Robots

    NASA Astrophysics Data System (ADS)

    Szkodny, T.

    2014-08-01

    The paper presents a simple method of avoiding singular configurations of contemporary industrial robot manipulators of such renowned companies as ABB, Fanuc, Mitsubishi, Adept, Kawasaki, COMAU and KUKA. To determine the singular configurations of these manipulators, a global form of description of the end-effector kinematics relative to the other links was prepared. On the basis of this description, the formula for the Jacobian was defined in the end-effector coordinates. Next, a closed form of the determinant of the Jacobian was derived. From this formula, the singular configurations, where the determinant's value equals zero, were determined. Additionally, geometric interpretations of these configurations were given and illustrated. For an exemplary manipulator, small corrections of joint variables preventing the reduction of the Jacobian order were suggested. An analysis of the positional errors caused by these corrections was presented.
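
    As a much simpler illustration of locating singular configurations from the determinant of the Jacobian, consider a planar two-link arm, for which det J = l1·l2·sin(q2) vanishes when the elbow is fully extended or folded; this toy example is ours, not one of the manipulators analyzed in the paper.

    ```python
    import numpy as np

    def planar_2r_jacobian(q, l1=1.0, l2=0.8):
        """Geometric Jacobian of a planar 2-R arm, mapping joint rates to
        end-effector (x, y) velocity."""
        q1, q2 = q
        j11 = -l1 * np.sin(q1) - l2 * np.sin(q1 + q2)
        j12 = -l2 * np.sin(q1 + q2)
        j21 =  l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
        j22 =  l2 * np.cos(q1 + q2)
        return np.array([[j11, j12], [j21, j22]])

    def near_singular(q, tol=1e-2):
        """Flag configurations where |det J| is small (det J = l1*l2*sin(q2))."""
        return abs(np.linalg.det(planar_2r_jacobian(q))) < tol

    for q2 in (0.0, 0.1, np.pi / 2, np.pi):
        q = np.array([0.3, q2])
        print(f"q2 = {q2:5.2f}  det J = {np.linalg.det(planar_2r_jacobian(q)): .4f}"
              f"  near-singular: {near_singular(q)}")
    ```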

  8. Flux Jacobian Matrices For Equilibrium Real Gases

    NASA Technical Reports Server (NTRS)

    Vinokur, Marcel

    1990-01-01

    Improved formulation includes generalized Roe average and extension to three dimensions. Flux Jacobian matrices derived for use in numerical solutions of conservation-law differential equations of inviscid flows of ideal gases extended to real gases. Real-gas formulation of these matrices retains simplifying assumptions of thermodynamic and chemical equilibrium, but adds effects of vibrational excitation, dissociation, and ionization of gas molecules via general equation of state.

  9. A comparison of CMG steering laws for High Energy Astronomy Observatories (HEAOs)

    NASA Technical Reports Server (NTRS)

    Davis, B. G.

    1972-01-01

    A comparison of six selected control moment gyro (CMG) steering laws for use on the HEAO spacecraft is reported. Basic equations are developed to project the momentum and torque of four skewed, single-gimbal CMGs into vehicle coordinates. In response to the spacecraft attitude error signal, six algorithms are derived for controlling the CMG gimbal movements. HEAO performance data are obtained using each steering law and compared on the basis of factors such as accuracy, complexity, singularities, gyro hang-up and failure adaptation. Moreover, each law is simulated with and without a magnetic momentum management system. The performance of any steering law is enhanced by the magnetic system. Without magnetics, the gimbal angles become large and there are significant differences in steering law performance due to cross coupling and nonlinearities. The pseudo-inverse law is recommended for HEAO.
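
    The pseudo-inverse steering law referred to above maps a commanded control torque to gimbal rates through the Moore-Penrose inverse of the CMG torque Jacobian. The sketch below uses a generic four-CMG pyramid with placeholder skew angle, momentum magnitude and commands, not the HEAO configuration.

    ```python
    import numpy as np

    def cmg_jacobian(delta, beta=np.deg2rad(54.73), h=1.0):
        """Torque Jacobian A(delta) of four single-gimbal CMGs in a standard
        skewed pyramid: tau = A(delta) @ delta_dot."""
        cb, sb = np.cos(beta), np.sin(beta)
        s, c = np.sin(delta), np.cos(delta)
        return h * np.array([
            [-cb * c[0],       s[1],  cb * c[2],      -s[3]],
            [     -s[0], -cb * c[1],       s[2],  cb * c[3]],
            [ sb * c[0],  sb * c[1],  sb * c[2],  sb * c[3]],
        ])

    def pseudo_inverse_steering(delta, tau_cmd):
        """Gimbal-rate command delta_dot = A^+ tau_cmd (minimum-norm solution)."""
        return np.linalg.pinv(cmg_jacobian(delta)) @ tau_cmd

    delta = np.deg2rad([10.0, -20.0, 35.0, 5.0])     # current gimbal angles
    tau_cmd = np.array([0.2, -0.1, 0.05])            # commanded torque (N·m)
    delta_dot = pseudo_inverse_steering(delta, tau_cmd)
    print("gimbal rates:   ", delta_dot)
    print("achieved torque:", cmg_jacobian(delta) @ delta_dot)
    ```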

  10. A Feasibility Study on a Parallel Mechanism for Examining the Space Shuttle Orbiter Payload Bay Radiators

    NASA Technical Reports Server (NTRS)

    Roberts, Rodney G.; LopezdelCastillo, Eduardo

    1996-01-01

    The goal of the project was to develop the necessary analysis tools for a feasibility study of a cable-suspended robot system for examining the space shuttle orbiter payload bay radiators. These tools were developed to address design issues such as workspace size, tension requirements on the cables, the necessary accuracy and resolution requirements, and the stiffness and movement requirements of the system. This report describes the mathematical models for studying the inverse kinematics, statics, and stiffness of the robot. Each model is described by a matrix. The manipulator Jacobian was also related to the stiffness matrix, which characterizes the stiffness of the system. Analysis tools were then developed based on the singular value decomposition (SVD) of the corresponding matrices. It was demonstrated how the SVD can be used to quantify the robot's performance and to provide insight into different design issues.
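
    The kind of SVD-based analysis described above can be sketched generically: the singular values of a manipulator Jacobian give conditioning and manipulability measures. The 3x4 matrix below is an arbitrary placeholder, not the cable-robot model from the report.

    ```python
    import numpy as np

    def svd_performance_measures(J):
        """Conditioning measures commonly read off the SVD of a manipulator
        Jacobian: extreme singular values, condition number, and the Yoshikawa
        manipulability sqrt(det(J J^T))."""
        s = np.linalg.svd(J, compute_uv=False)
        return {
            "sigma_max": s[0],
            "sigma_min": s[-1],
            "condition_number": s[0] / s[-1],
            "manipulability": float(np.prod(s)),
        }

    # Hypothetical 3x4 Jacobian of a redundant (e.g. cable-driven) mechanism.
    J = np.array([[1.0, 0.2, 0.0, -0.5],
                  [0.0, 1.1, 0.3,  0.4],
                  [0.2, 0.0, 0.9, -0.1]])
    print(svd_performance_measures(J))
    ```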

  11. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix inversion is required even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  12. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Astrophysics Data System (ADS)

    Chitsomboon, Tawit

    1992-02-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix inversion is required even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  13. Fast calculation of the sensitivity matrix in magnetic induction tomography by tetrahedral edge finite elements and the reciprocity theorem.

    PubMed

    Hollaus, K; Magele, C; Merwa, R; Scharfetter, H

    2004-02-01

    Magnetic induction tomography of biological tissue is used to reconstruct the changes in the complex conductivity distribution by measuring the perturbation of an alternating primary magnetic field. To facilitate the sensitivity analysis and the solution of the inverse problem a fast calculation of the sensitivity matrix, i.e. the Jacobian matrix, which maps the changes of the conductivity distribution onto the changes of the voltage induced in a receiver coil, is needed. The use of finite differences to determine the entries of the sensitivity matrix does not represent a feasible solution because of the high computational costs of the basic eddy current problem. Therefore, the reciprocity theorem was exploited. The basic eddy current problem was simulated by the finite element method using symmetric tetrahedral edge elements of second order. To test the method various simulations were carried out and discussed.

  14. Neutrino masses from a pseudo-Dirac bino

    DOE PAGES

    Coloma, Pilar; Ipek, Seyda

    2016-09-09

    We show that, in U(1) R-symmetric supersymmetric models, the bino and its Dirac partner (the singlino) can play the role of right-handed neutrinos and generate the neutrino masses and mixing, without the need for traditional bilinear or trilinear R-parity violating operators. The two particles form a pseudo-Dirac pair, the “biνo.” An inverse seesaw texture is generated for the neutrino-biνo sector, and the lightest neutrino is predicted to be massless. Lastly, unlike in most models with heavy right-handed neutrinos, the biνo can be sizably produced at the LHC through its interactions with colored particles, while respecting low energy constraints from neutrinoless double-beta decay and charged lepton flavor violation.

  15. Biomechanical CT Metrics Are Associated With Patient Outcomes in COPD

    PubMed Central

    Bodduluri, Sandeep; Bhatt, Surya P; Hoffman, Eric A.; Newell, John D.; Martinez, Carlos H.; Dransfield, Mark T.; Han, Meilan K.; Reinhardt, Joseph M.

    2017-01-01

    Background Traditional metrics of lung disease such as those derived from spirometry and static single-volume CT images are used to explain respiratory morbidity in patients with chronic obstructive pulmonary disease (COPD), but are insufficient. We hypothesized that the mean Jacobian determinant, a measure of local lung expansion and contraction with respiration, would contribute independently to clinically relevant functional outcomes. Methods We applied image registration techniques to paired inspiratory-expiratory CT scans and derived the Jacobian determinant of the deformation field between the two lung volumes to map local volume change with respiration. We analyzed 490 participants with COPD with multivariable regression models to assess the strengths of association between traditional CT metrics of disease and the Jacobian determinant with respiratory morbidity including dyspnea (mMRC), St George's Respiratory Questionnaire (SGRQ) score, six-minute walk distance (6MWD), and the BODE index, as well as all-cause mortality. Results The Jacobian determinant was significantly associated with SGRQ (adjusted regression coefficient β = −11.75, 95% CI −21.6 to −1.7; p = 0.020) and with 6MWD (β = 321.15, 95% CI 134.1 to 508.1; p < 0.001), independent of age, sex, race, body mass index, FEV1, smoking pack-years, CT emphysema, CT gas trapping, airway wall thickness, and CT scanner protocol. The mean Jacobian determinant was also independently associated with the BODE index (β = −0.41, 95% CI −0.80 to −0.02; p = 0.039) and with mortality on follow-up (adjusted hazard ratio = 4.26, 95% CI 0.93 to 19.23; p = 0.064). Conclusion Biomechanical metrics representing local lung expansion and contraction improve prediction of respiratory morbidity and mortality and offer additional prognostic information beyond traditional measures of lung function and static single-volume CT metrics. PMID:28044005
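
    To make the Jacobian-determinant metric concrete, here is a minimal sketch of computing a voxel-wise Jacobian determinant from a displacement field with finite differences; the array shapes, spacing and toy field are hypothetical, and registration packages provide their own implementations.

    ```python
    import numpy as np

    def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
        """Voxel-wise Jacobian determinant of the mapping x -> x + u(x).
        disp: displacement field of shape (3, nx, ny, nz)."""
        grads = np.empty((3, 3) + disp.shape[1:])
        for i in range(3):                  # component of u
            for j in range(3):              # derivative direction
                grads[i, j] = np.gradient(disp[i], spacing[j], axis=j)
        J = grads + np.eye(3)[:, :, None, None, None]   # J = I + du/dx per voxel
        return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))

    # Toy displacement field: uniform 2% expansion along x.
    nx, ny, nz = 8, 8, 8
    disp = np.zeros((3, nx, ny, nz))
    disp[0] = 0.02 * np.arange(nx, dtype=float)[:, None, None]
    print("mean Jacobian determinant:", jacobian_determinant(disp).mean())  # ~1.02
    ```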

  16. Spatial patterns of progressive brain volume loss after moderate-severe traumatic brain injury

    PubMed Central

    Jolly, Amy; de Simoni, Sara; Bourke, Niall; Patel, Maneesh C; Scott, Gregory; Sharp, David J

    2018-01-01

    Abstract Traumatic brain injury leads to significant loss of brain volume, which continues into the chronic stage. This can be sensitively measured using volumetric analysis of MRI. Here we: (i) investigated longitudinal patterns of brain atrophy; (ii) tested whether atrophy is greatest in sulcal cortical regions; and (iii) showed how atrophy could be used to power intervention trials aimed at slowing neurodegeneration. In 61 patients with moderate-severe traumatic brain injury (mean age = 41.55 years ± 12.77) and 32 healthy controls (mean age = 34.22 years ± 10.29), cross-sectional and longitudinal (1-year follow-up) brain structure was assessed using voxel-based morphometry on T1-weighted scans. Longitudinal brain volume changes were characterized using a novel neuroimaging analysis pipeline that generates a Jacobian determinant metric, reflecting spatial warping between baseline and follow-up scans. Jacobian determinant values were summarized regionally and compared with clinical and neuropsychological measures. Patients with traumatic brain injury showed lower grey and white matter volume in multiple brain regions compared to controls at baseline. Atrophy over 1 year was pronounced following traumatic brain injury. Patients with traumatic brain injury lost a mean (± standard deviation) of 1.55% ± 2.19 of grey matter volume per year, 1.49% ± 2.20 of white matter volume or 1.51% ± 1.60 of whole brain volume. Healthy controls lost 0.55% ± 1.13 of grey matter volume and gained 0.26% ± 1.11 of white matter volume; equating to a 0.22% ± 0.83 reduction in whole brain volume. Atrophy was greatest in white matter, where the majority (84%) of regions were affected. This effect was independent of and substantially greater than that of ageing. Increased atrophy was also seen in cortical sulci compared to gyri. There was no relationship between atrophy and time since injury or age at baseline. Atrophy rates were related to memory performance at the end of the follow-up period, as well as to changes in memory performance, prior to multiple comparison correction. In conclusion, traumatic brain injury results in progressive loss of brain tissue volume, which continues for many years post-injury. Atrophy is most prominent in the white matter, but is also more pronounced in cortical sulci compared to gyri. These findings suggest the Jacobian determinant provides a method of quantifying brain atrophy following a traumatic brain injury and is informative in determining the long-term neurodegenerative effects after injury. Power calculations indicate that Jacobian determinant images are an efficient surrogate marker in clinical trials of neuroprotective therapeutics. PMID:29309542

  17. Computer program for post-flight evaluation of the control surface response for an attitude controlled missile

    NASA Technical Reports Server (NTRS)

    Knauber, R. N.

    1982-01-01

    A FORTRAN IV coded computer program is presented for post-flight analysis of a missile's control surface response. It includes preprocessing of digitized telemetry data for time lags, biases, non-linear calibration changes and filtering. Measurements include autopilot attitude rate and displacement gyro output and four control surface deflections. Simple first order lags are assumed for the pitch, yaw and roll axes of control. Each actuator is also assumed to be represented by a first order lag. Mixing of pitch, yaw and roll commands to four control surfaces is assumed. A pseudo-inverse technique is used to obtain the pitch, yaw and roll components from the four measured deflections. This program has been used for over 10 years on the NASA/SCOUT launch vehicle for post-flight analysis and was helpful in detecting incipient actuator stall due to excessive hinge moments. The program is currently set up for a CDC CYBER 175 computer system. It requires 34K words of memory and contains 675 cards. A sample problem presented herein including the optional plotting requires eleven (11) seconds of central processor time.
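
    The pseudo-inverse step described above can be illustrated generically: if a mixing matrix maps pitch, yaw and roll commands to four surface deflections, its Moore-Penrose inverse recovers the least-squares axis components from the four measured deflections. The 4x3 mixing matrix below is a made-up placeholder, not the SCOUT definition.

    ```python
    import numpy as np

    # Hypothetical mixing: deflections = M @ [pitch, yaw, roll]
    M = np.array([
        [1.0, 0.0,  1.0],   # surface 1
        [1.0, 0.0, -1.0],   # surface 2
        [0.0, 1.0,  1.0],   # surface 3
        [0.0, 1.0, -1.0],   # surface 4
    ])

    def axis_components(deflections):
        """Least-squares pitch/yaw/roll components from 4 measured deflections."""
        return np.linalg.pinv(M) @ deflections

    true_cmd = np.array([2.0, -1.0, 0.5])                               # degrees
    measured = M @ true_cmd + np.random.default_rng(1).normal(scale=0.05, size=4)
    print("recovered [pitch, yaw, roll]:", axis_components(measured))
    ```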

  18. Integration of Mesh Optimization with 3D All-Hex Mesh Generation, LDRD Subcase 3504340000, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KNUPP,PATRICK; MITCHELL,SCOTT A.

    1999-11-01

    In an attempt to automatically produce high-quality all-hex meshes, we investigated a mesh improvement strategy: given an initial poor-quality all-hex mesh, we iteratively changed the element connectivity, adding and deleting elements and nodes, and optimized the node positions. We found a set of hex reconnection primitives. We improved the optimization algorithms so they can untangle a negative-Jacobian mesh, even considering Jacobians on the boundary, and subsequently optimize the condition number of elements in an untangled mesh. However, even after applying both the primitives and optimization we were unable to produce high-quality meshes in certain regions. Our experiences suggest that many boundary configurations of quadrilaterals admit no hexahedral mesh with positive Jacobians, although we have no proof of this.

  19. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in the source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.

  20. A model of Precambrian geology of Kansas derived from gravity and magnetic data

    USGS Publications Warehouse

    Xia, J.; Sprowl, D.R.; Steeples, D.W.

    1996-01-01

    The fabric of the Precambrian geology of Kansas is revealed through inversion of gravity and magnetic data to pseudo-lithology. There are five main steps in the inversion process: (1) reduction of potential-field data to a horizontal plane in the wavenumber domain; (2) separation of the residual anomaly of interest from the regional background, where an assumption is made that the regional anomaly could be represented by some order of polynomial; (3) subtraction of the signal due to the known topography on the Phanerozoic/Precambrian boundary from the residual anomaly (we assume what is left at this stage are the signals due to lateral variation in the Precambrian lithology); (4) inversion of the residual anomaly in the wavenumber domain to density and magnetization distribution in the top part of the Precambrian constrained by the known geologic information; (5) derivation of pseudo-lithology by characterization of density and magnetization. The boundary between the older Central Plains Province to the north and the Southern Granite-Rhyolite Province to the south is clearly delineated. The Midcontinent Rift System appears to widen in central Kansas and involve a considerable portion of southern Kansas. Lithologies in southwestern Kansas appear to change over fairly small areas and include mafic rocks which have not been encountered in drill holes. The texture of the potential field data from southwestern Kansas suggests a history of continental growth by broad extension. Copyright © 1996 Elsevier Science Ltd.

  1. Magnetotelluric Forward Modeling and Inversion In 3 -d Conductivity Model of The Vesuvio Volcano

    NASA Astrophysics Data System (ADS)

    Spichak, V.; Patella, D.

    Three-dimensional forward modeling of MT fields in a simplified conductivity model of the Vesuvio volcano (T = 0.1, 1, 10, 100 and 1000 s) indicates that the best image of the magma chamber can be obtained from the pseudo-section of the determinant apparent resistivity phase, as well as from the real and imaginary components of the electric field. Another important result of these studies is the demonstration that the magma chamber can, in principle, be detected and contoured from 2-D pseudo-sections constructed from the data transforms mentioned above. Bayesian three-dimensional inversion of synthetic MT data in the volcano model indicates that it is possible to determine the depth and vertical size of the magma chamber; however, simultaneous recovery of the conductivity distribution inside the search domain is of poor quality. If the geometrical parameters of the magma chamber are determined in advance, it becomes quite realistic to recover the conductivity distribution inside. The accuracy of such an estimate strongly depends on the uncertainty in its prior value: the narrower the prior conductivity range, the closer the posterior conductivity distribution can be to the true one.

  2. Neonatal MRI is associated with future cognition and academic achievement in preterm children

    PubMed Central

    Spencer-Smith, Megan; Thompson, Deanne K.; Doyle, Lex W.; Inder, Terrie E.; Anderson, Peter J.; Klingberg, Torkel

    2015-01-01

    School-age children born preterm are particularly at risk for low mathematical achievement, associated with reduced working memory and number skills. Early identification of preterm children at risk for future impairments using brain markers might assist in referral for early intervention. This study aimed to examine the use of neonatal magnetic resonance imaging measures derived from automated methods (Jacobian maps from deformation-based morphometry; fractional anisotropy maps from diffusion tensor images) to predict skills important for mathematical achievement (working memory, early mathematical skills) at 5 and 7 years in a cohort of preterm children using both univariable (general linear model) and multivariable models (support vector regression). Participants were preterm children born <30 weeks’ gestational age and healthy control children born ≥37 weeks’ gestational age at the Royal Women’s Hospital in Melbourne, Australia between July 2001 and December 2003 and recruited into a prospective longitudinal cohort study. At term-equivalent age ( ±2 weeks) 224 preterm and 46 control infants were recruited for magnetic resonance imaging. Working memory and early mathematics skills were assessed at 5 years (n = 195 preterm; n = 40 controls) and 7 years (n = 197 preterm; n = 43 controls). In the preterm group, results identified localized regions around the insula and putamen in the neonatal Jacobian map that were positively associated with early mathematics at 5 and 7 years (both P < 0.05), even after covarying for important perinatal clinical factors using general linear model but not support vector regression. The neonatal Jacobian map showed the same trend for association with working memory at 7 years (models ranging from P = 0.07 to P = 0.05). Neonatal fractional anisotropy was positively associated with working memory and early mathematics at 5 years (both P < 0.001) even after covarying for clinical factors using support vector regression but not general linear model. These significant relationships were not observed in the control group. In summary, we identified, in the preterm brain, regions around the insula and putamen using neonatal deformation-based morphometry, and brain microstructural organization using neonatal diffusion tensor imaging, associated with skills important for childhood mathematical achievement. Results contribute to the growing evidence for the clinical utility of neonatal magnetic resonance imaging for early identification of preterm infants at risk for childhood cognitive and academic impairment. PMID:26329284

  3. Input-output-controlled nonlinear equation solvers

    NASA Technical Reports Server (NTRS)

    Padovan, Joseph

    1988-01-01

    To upgrade the efficiency and stability of the successive substitution (SS) and Newton-Raphson (NR) schemes, the concept of input-output-controlled solvers (IOCS) is introduced. By employing the formal properties of the constrained versions of the SS and NR schemes, the IOCS algorithm can handle indefiniteness of the system Jacobian, maintain iterate monotonicity, and provide separate control of load incrementation and iterate excursions, among other features. To illustrate the algorithmic properties, results for several benchmark examples are presented. These define the associated numerical efficiency and stability of the IOCS.

  4. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth one only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm with the Levenberg-Marquardt method is more effective than its combination with the Gauss-Newton method.
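
    A compact sketch of the variable-projection idea for a separable model y ≈ Φ(θ)c: for each trial of the nonlinear parameters θ, the linear coefficients c are eliminated by a linear least-squares solve and only the reduced residual is passed to the nonlinear optimizer. The exponential-sum model and SciPy's least_squares (with its internal numerical Jacobian) are illustrative choices, not any of the four specific algorithms compared in the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 4.0, 60)
    theta_true, c_true = np.array([0.7, 2.5]), np.array([1.0, 0.4])
    y = np.exp(-np.outer(t, theta_true)) @ c_true + rng.normal(scale=0.01, size=t.size)

    def reduced_residual(theta):
        """Residual of the reduced problem: the linear coefficients are projected
        out with a linear least-squares solve for the current theta."""
        Phi = np.exp(-np.outer(t, theta))              # design matrix Phi(theta)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return Phi @ c - y

    sol = least_squares(reduced_residual, x0=np.array([0.3, 1.0]))
    Phi = np.exp(-np.outer(t, sol.x))
    c_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print("theta:", sol.x, "c:", c_hat)
    ```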

  5. Energy conversion in magneto-rheological elastomers

    NASA Astrophysics Data System (ADS)

    Sebald, Gael; Nakano, Masami; Lallart, Mickaël; Tian, Tongfei; Diguet, Gildas; Cavaille, Jean-Yves

    2017-12-01

    Magneto-rheological (MR) elastomers contain micro-/nano-sized ferromagnetic particles dispersed in a soft elastomer matrix, and their rheological properties (storage and loss moduli) exhibit a significant dependence on the application of a magnetic field (the MR effect). Conversely, it is reported in this work that this multiphysics coupling is associated with an inverse effect (i.e. the dependence of the magnetic properties on mechanical strain), denoted as the pseudo-Villari effect. MR elastomers based on soft and hard silicone rubber matrices and carbonyl iron particles were fabricated and characterized. The pseudo-Villari effect was experimentally quantified: a shear strain of 50% induces magnetic induction field variations of up to 10 mT in anisotropic MR elastomer samples placed in a 0.2 T applied field, which might theoretically lead to a potential energy conversion density on the order of mJ cm-3. In the case of anisotropic MR elastomers, the absolute variation of stiffness as a function of applied magnetic field is rather independent of the matrix properties. Similarly, the pseudo-Villari effect is found to be independent of the stiffness, thus broadening the adaptability of the materials to sensing and energy harvesting target applications. The potential of the pseudo-Villari effect for energy harvesting applications is finally briefly discussed.

  6. Energy conversion in magneto-rheological elastomers

    PubMed Central

    Sebald, Gael; Nakano, Masami; Lallart, Mickaël; Tian, Tongfei; Diguet, Gildas; Cavaille, Jean-Yves

    2017-01-01

    Abstract Magneto-rheological (MR) elastomers contain micro-/nano-sized ferromagnetic particles dispersed in a soft elastomer matrix, and their rheological properties (storage and loss moduli) exhibit a significant dependence on the application of a magnetic field (the MR effect). Conversely, it is reported in this work that this multiphysics coupling is associated with an inverse effect (i.e. the dependence of the magnetic properties on mechanical strain), denoted as the pseudo-Villari effect. MR elastomers based on soft and hard silicone rubber matrices and carbonyl iron particles were fabricated and characterized. The pseudo-Villari effect was experimentally quantified: a shear strain of 50% induces magnetic induction field variations of up to 10 mT in anisotropic MR elastomer samples placed in a 0.2 T applied field, which might theoretically lead to a potential energy conversion density on the order of mJ cm-3. In the case of anisotropic MR elastomers, the absolute variation of stiffness as a function of applied magnetic field is rather independent of the matrix properties. Similarly, the pseudo-Villari effect is found to be independent of the stiffness, thus broadening the adaptability of the materials to sensing and energy harvesting target applications. The potential of the pseudo-Villari effect for energy harvesting applications is finally briefly discussed. PMID:29152013

  7. MO-F-CAMPUS-J-05: Toward MRI-Only Radiotherapy: Novel Tissue Segmentation and Pseudo-CT Generation Techniques Based On T1 MRI Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aouadi, S; McGarry, M; Hammoud, R

    Purpose: To develop and validate a 4-class tissue segmentation approach (air cavities, background, bone and soft tissue) on T1-weighted brain MRI and to create a pseudo-CT for MRI-only radiation therapy verification. Methods: Contrast-enhanced T1-weighted fast-spin-echo sequences (TR = 756 ms, TE = 7.152 ms), acquired on a 1.5 T GE MRI simulator, are used. MRIs are first pre-processed to correct for non-uniformity using a non-parametric non-uniformity intensity normalization algorithm. Subsequently, a logarithmic inverse scaling log(1/image) is applied prior to segmentation to better differentiate bone and air from soft tissue. Finally, the following method is employed to classify intensities into air cavities, background, bone and soft tissue: thresholded region growing with seed points in the image corners is applied to obtain a mask of air + bone + background. The background is then separated by the scan-line filling algorithm. The air mask is extracted by morphological opening followed by post-processing based on knowledge about the geometry of air regions. The remaining rough bone pre-segmentation is refined by applying 3D geodesic active contours; the bone segmentation evolves under the sum of internal forces from the contour geometry and an external force derived from the image gradient magnitude. The pseudo-CT is obtained by assigning −1000 HU to air and background voxels, performing a linear mapping of soft-tissue MR intensities into [−400 HU, 200 HU] and an inverse linear mapping of bone MR intensities into [200 HU, 1000 HU]. Results: Three brain patients having registered MRI and CT are used for validation. CT intensities are classified into 4 classes by thresholding. Dice indices and misclassification errors are quantified. Correct classification rates for soft tissue, bone, and air are 89.67%, 77.8%, and 64.5%, respectively. Dice indices are acceptable for bone (0.74) and soft tissue (0.91) but low for air regions (0.48). The pseudo-CT produces DRRs with acceptable clinical visual agreement with CT-based DRRs. Conclusion: The proposed approach makes it possible to use T1-weighted MRI to generate an accurate pseudo-CT from the 4-class segmentation.
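
    The final intensity-mapping step can be sketched as a piecewise assignment from the 4-class segmentation, following the HU windows quoted above; the class labels, percentile-based MR intensity ranges and toy volume are placeholders, not the validated pipeline.

    ```python
    import numpy as np

    def pseudo_ct(mr, labels):
        """Assign HU values from a 4-class segmentation of a T1 MR volume.
        labels: 0 = background, 1 = air cavity, 2 = soft tissue, 3 = bone."""
        hu = np.full(mr.shape, -1000.0)                 # air and background

        def linmap(x, lo, hi, hu_lo, hu_hi):
            x = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
            return hu_lo + x * (hu_hi - hu_lo)

        soft, bone = labels == 2, labels == 3
        if soft.any():
            lo, hi = np.percentile(mr[soft], [1, 99])
            hu[soft] = linmap(mr[soft], lo, hi, -400.0, 200.0)   # linear map
        if bone.any():
            lo, hi = np.percentile(mr[bone], [1, 99])
            hu[bone] = linmap(mr[bone], lo, hi, 1000.0, 200.0)   # inverse map
        return hu

    # Toy volume and segmentation with hypothetical intensities.
    rng = np.random.default_rng(0)
    mr = rng.uniform(0, 1000, size=(4, 4, 4))
    labels = rng.integers(0, 4, size=(4, 4, 4))
    print(pseudo_ct(mr, labels)[0, 0])
    ```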

  8. Post-traumatic hepatic artery pseudo-aneurysm combined with subphrenic liver abscess treated with embolization

    PubMed Central

    Sun, Long; Guan, Yong-Song; Wu, Hua; Pan, Wei-Min; Li, Xiao; He, Qing; Liu, Yuan

    2006-01-01

    A 23-year-old man with post-traumatic hepatic artery pseudo-aneurysm and subphrenic liver abscess was admitted. He underwent coil embolization of hepatic artery pseudo-aneurysm. The pseudo-aneurysm was successfully obstructed and subphrenic liver abscess was controlled. Super-selective trans-catheter coil embolization may represent an effective treatment for hepatic artery pseudo-aneurysm combined with subphrenic liver abscess in the absence of other therapeutic alternatives. PMID:16718774

  9. Post-traumatic hepatic artery pseudo-aneurysm combined with subphrenic liver abscess treated with embolization.

    PubMed

    Sun, Long; Guan, Yong-Song; Wu, Hua; Pan, Wei-Min; Li, Xiao; He, Qing; Liu, Yuan

    2006-05-07

    A 23-year-old man with post-traumatic hepatic artery pseudo-aneurysm and subphrenic liver abscess was admitted. He underwent coil embolization of hepatic artery pseudo-aneurysm. The pseudo-aneurysm was successfully obstructed and subphrenic liver abscess was controlled. Super-selective trans-catheter coil embolization may represent an effective treatment for hepatic artery pseudo-aneurysm combined with subphrenic liver abscess in the absence of other therapeutic alternatives.

  10. A Pseudo Fractional-N Clock Generator with 50% Duty Cycle Output

    NASA Astrophysics Data System (ADS)

    Yang, Wei-Bin; Lo, Yu-Lung; Chao, Ting-Sheng

    A proposed pseudo fractional-N clock generator with 50% duty cycle output is presented, using a pseudo fractional-N controller for SoC chips and dynamic frequency scaling applications. Different clock frequencies can be generated from particular phase combinations of a four-stage voltage-controlled oscillator (VCO). It has been fabricated in a 0.13 µm CMOS technology and works with a supply voltage of 1.2 V. According to the measured results, the frequency range of the proposed pseudo fractional-N clock generator is from 71.4 MHz to 1 GHz and the peak-to-peak jitter is less than 5% of the output period. Duty cycle error rates of the output clock frequencies are from 0.8% to 2%, and the measured power dissipation of the pseudo fractional-N controller is 146 µW at 304 MHz.

  11. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    NASA Astrophysics Data System (ADS)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the electrical conductivity structure of the Earth's interior. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, sea floor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new forward model during each iteration of the inversion.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loef, P.A.; Smed, T.; Andersson, G.

    The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
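
    One generic way to obtain the smallest singular value and its vectors cheaply while preserving sparsity is inverse iteration on J^T J using a single sparse LU factorization of J; this sketch illustrates the idea and is not the authors' specific algorithm, and the test matrix is a random stand-in for a power flow Jacobian.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    def min_singular_triplet(J, iters=500, seed=0):
        """Smallest singular value of a sparse, square Jacobian J with the
        corresponding right (v) and left (u) singular vectors, via inverse
        iteration on J^T J reusing one sparse LU factorization of J."""
        lu = splu(J.tocsc())
        v = np.random.default_rng(seed).standard_normal(J.shape[0])
        for _ in range(iters):
            # w = (J^T J)^{-1} v, computed as J^{-1} (J^{-T} v)
            w = lu.solve(lu.solve(v, trans='T'))
            v = w / np.linalg.norm(w)
        sigma_min = np.linalg.norm(J @ v)
        u = (J @ v) / sigma_min
        return sigma_min, u, v

    # Small sparse test matrix standing in for a power flow Jacobian.
    J = sp.random(200, 200, density=0.02, random_state=1, format='csc') \
        + 5 * sp.eye(200, format='csc')
    s_min, u, v = min_singular_triplet(J)
    print("sigma_min (inverse iteration):", s_min)
    print("sigma_min (dense SVD check):  ",
          np.linalg.svd(J.toarray(), compute_uv=False)[-1])
    ```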

  13. Chow groups of intersections of quadrics via homological projective duality and (Jacobians of) non-commutative motives

    NASA Astrophysics Data System (ADS)

    Bernardara, M.; Tabuada, G.

    2016-06-01

    Conjectures of Beilinson-Bloch type predict that the low-degree rational Chow groups of intersections of quadrics are one-dimensional. This conjecture was proved by Otwinowska in [20]. By making use of homological projective duality and the recent theory of (Jacobians of) non-commutative motives, we give an alternative proof of this conjecture in the case of a complete intersection of either two quadrics or three odd-dimensional quadrics. Moreover, we prove that in these cases the unique non-trivial algebraic Jacobian is the middle one. As an application, we make use of Vial's work [26], [27] to describe the rational Chow motives of these complete intersections and show that smooth fibrations into such complete intersections over bases S of small dimension satisfy Murre's conjecture (when dim(S) ≤ 1), Grothendieck's standard conjecture of Lefschetz type (when dim(S) ≤ 2), and Hodge's conjecture (when dim(S) ≤ 3).

  14. Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB

    NASA Technical Reports Server (NTRS)

    Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.

    2017-01-01

    Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit potential for improvement of serial code even for the so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated while only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
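
    The computation being parallelized above is a column-by-column finite-difference Jacobian, where each column is an independent residual evaluation. The sketch below mirrors that structure in Python with a process pool; the residual is hypothetical and this is not the MATLAB spmd code of the paper.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def F(x):
        # Hypothetical nonlinear residual.
        return np.array([x[0] ** 2 + x[1], np.sin(x[1]) + x[2], x[0] * x[2] - 1.0])

    def _column(args):
        """One forward-difference Jacobian column; independent of the others."""
        f0, x, j, h = args
        xp = x.copy()
        xp[j] += h
        return (F(xp) - f0) / h

    def fd_jacobian(x, h=1e-7, parallel=False):
        """Forward-difference Jacobian; the columns are embarrassingly parallel."""
        f0 = F(x)
        tasks = [(f0, x, j, h) for j in range(x.size)]
        if parallel:
            with ProcessPoolExecutor() as pool:
                cols = list(pool.map(_column, tasks))
        else:
            cols = [_column(t) for t in tasks]
        return np.column_stack(cols)

    if __name__ == "__main__":
        print(fd_jacobian(np.array([1.0, 0.5, 2.0])))
    ```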

  15. Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite

    NASA Astrophysics Data System (ADS)

    Kanakubo, Masaaki; Hagiwara, Masafumi

    In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GA. In the Bee system, each chromosome initially tries to find a good solution individually as a global search. When some chromosome is regarded as a superior one, the other chromosomes search for solutions around it. However, since the chromosomes for global search are generated randomly, the Bee system lacks global search ability. On the other hand, in Inverse-elitism, an inverse-elite whose gene values are reversed from the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with a Pseudo-simplex method is employed for the global search of the Bee system in order to strengthen its global search ability; in addition, it retains strong local search ability. The proposed method thus has synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.

  16. Characterization of network structure in stereoEEG data using consensus-based partial coherence.

    PubMed

    Ter Wal, Marije; Cardellicchio, Pasquale; LoRusso, Giorgio; Pelliccia, Veronica; Avanzini, Pietro; Orban, Guy A; Tiesinga, Paul He

    2018-06-06

    Coherence is a widely used measure to determine the frequency-resolved functional connectivity between pairs of recording sites, but this measure is confounded by shared inputs to the pair. To remove shared inputs, the 'partial coherence' can be computed by conditioning the spectral matrices of the pair on all other recorded channels, which involves the calculation of a matrix (pseudo-) inverse. It has so far remained a challenge to use the time-resolved partial coherence to analyze intracranial recordings with a large number of recording sites. For instance, calculating the partial coherence using a pseudoinverse method produces a high number of false positives when it is applied to a large number of channels. To address this challenge, we developed a new method that randomly aggregated channels into a smaller number of effective channels on which the calculation of partial coherence was based. We obtained a 'consensus' partial coherence (cPCOH) by repeating this approach for several random aggregations of channels (permutations) and only accepting those activations in time and frequency with a high enough consensus. Using model data we show that the cPCOH method effectively filters out the effect of shared inputs and performs substantially better than the pseudo-inverse. We successfully applied the cPCOH procedure to human stereotactic EEG data and demonstrated three key advantages of this method relative to alternative procedures. First, it reduces the number of false positives relative to the pseudo-inverse method. Second, it allows for titration of the amount of false positives relative to the false negatives by adjusting the consensus threshold, thus allowing the data-analyst to prioritize one over the other to meet specific analysis demands. Third, it substantially reduced the number of identified interactions compared to coherence, providing a sparser network of connections from which clear spatial patterns emerged. These patterns can serve as a starting point of further analyses that provide insight into network dynamics during cognitive processes. These advantages likely generalize to other modalities in which shared inputs introduce confounds, such as electroencephalography (EEG) and magneto-encephalography (MEG). Copyright © 2018. Published by Elsevier Inc.
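
    A schematic in the spirit of the consensus approach described above: the partial coherence between two channels is read off the (pseudo-)inverse of the cross-spectral matrix, the remaining channels are randomly aggregated into a few effective channels, and the fraction of random aggregations exceeding a threshold gives the consensus. The aggregation scheme, threshold and toy single-frequency cross-spectral matrix are placeholders; the published procedure differs in detail.

    ```python
    import numpy as np

    def partial_coherence(S, i, j):
        """Magnitude-squared partial coherence between channels i and j at one
        frequency, conditioned on all other channels in the cross-spectral
        matrix S, using the (pseudo-)inverse of S."""
        G = np.linalg.pinv(S)
        return np.abs(G[i, j]) ** 2 / (np.abs(G[i, i]) * np.abs(G[j, j]))

    def consensus_partial_coherence(S, i, j, n_groups=4, n_perm=50,
                                    thresh=0.2, seed=0):
        """cPCOH-style estimate: other channels are randomly summed into a few
        effective channels, partial coherence is computed for each random
        aggregation, and the consensus is the fraction exceeding the threshold."""
        rng = np.random.default_rng(seed)
        others = [k for k in range(S.shape[0]) if k not in (i, j)]
        hits = 0
        for _ in range(n_perm):
            groups = np.array_split(rng.permutation(others), n_groups)
            A = np.zeros((2 + n_groups, S.shape[0]))    # aggregation matrix
            A[0, i] = A[1, j] = 1.0
            for g, idx in enumerate(groups):
                A[2 + g, idx] = 1.0
            S_agg = A @ S @ A.conj().T
            hits += partial_coherence(S_agg, 0, 1) > thresh
        return hits / n_perm

    # Toy cross-spectral matrix at one frequency (hypothetical data).
    rng = np.random.default_rng(1)
    X = rng.standard_normal((12, 200)) + 1j * rng.standard_normal((12, 200))
    X[1] += 0.8 * X[0]                        # channels 0 and 1 genuinely coupled
    S = X @ X.conj().T / X.shape[1]
    print("consensus:", consensus_partial_coherence(S, 0, 1))
    ```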

  17. Definition of Shifts of Optical Transitions Frequencies due to Pulse Perturbation Action by the Photon Echo Signal Form

    NASA Astrophysics Data System (ADS)

    Lisin, V. N.; Shegeda, A. M.; Samartsev, V. V.

    2015-09-01

    A relative phase shift between different groups of excited dipoles, which appears as a result of their frequency splitting due to the action of a pulse of electric or magnetic field, depends on time if the pulse overlaps in time with the echo pulse. As a consequence, the echo waveform is changed: the temporal shape of the echo is modulated. The inverse modulation period approximates the Zeeman and pseudo-Stark splittings well in the cases of magnetic and electric fields, respectively. In this way the g-factors of the ground 4I15/2 and excited 4F9/2 optical states of the Er3+ ion in LuLiF4 and YLiF4 have been measured, and the pseudo-Stark shift of the R1 line in ruby has been determined.

  18. Pseudo-time-reversal symmetry and topological edge states in two-dimensional acoustic crystals

    PubMed Central

    Mei, Jun; Chen, Zeguo; Wu, Ying

    2016-01-01

    We propose a simple two-dimensional acoustic crystal to realize topologically protected edge states for acoustic waves. The acoustic crystal is composed of a triangular array of core-shell cylinders embedded in a water host. By utilizing the point group symmetry of two doubly degenerate eigenstates at the Γ point, we can construct pseudo-time-reversal symmetry as well as pseudo-spin states in this classical system. We develop an effective Hamiltonian for the associated dispersion bands around the Brillouin zone center, and find the inherent link between the band inversion and the topological phase transition. With numerical simulations, we unambiguously demonstrate the unidirectional propagation of acoustic edge states along the interface between a topologically nontrivial acoustic crystal and a trivial one, and the robustness of the edge states against defects with sharp bends. Our work provides a new design paradigm for manipulating and transporting acoustic waves in a topologically protected manner. Technological applications and devices based on our design are expected in various frequency ranges of interest, spanning from infrasound to ultrasound. PMID:27587311

  19. Sensitivity analysis of eigenvalues for an electro-hydraulic servomechanism

    NASA Astrophysics Data System (ADS)

    Stoia-Djeska, M.; Safta, C. A.; Halanay, A.; Petrescu, C.

    2012-11-01

    Electro-hydraulic servomechanisms (EHSM) are important components of flight control systems and their role is to control the movement of the flying control surfaces in response to the movement of the cockpit controls. As flight-control systems, EHSMs have a fast dynamic response, a high power-to-inertia ratio and high control accuracy. The paper is devoted to the study of the sensitivity of an electro-hydraulic servomechanism used to actuate an aircraft aileron. The mathematical model of the EHSM used in this paper includes a large number of parameters whose actual values may vary within some ranges of uncertainty. It consists of a nonlinear ordinary differential equation system composed of the mass and energy conservation equations, the actuator movement equations and the controller equation. In this work the focus is on the sensitivities of the eigenvalues of the linearized homogeneous system, which are the partial derivatives of the eigenvalues of the state-space system with respect to the parameters. These are obtained using a modal approach based on the eigenvectors of the state-space direct and adjoint systems. To calculate the eigenvalues and their sensitivities, the system's Jacobian and its partial derivatives with respect to the parameters are determined. The calculation of the derivative of the Jacobian matrix with respect to the parameters is not a simple task and in many situations it must be done numerically. The system stability is studied in relation to three parameters: m, the equivalent inertial load of the primary control surface reduced to the actuator rod; B, the bulk modulus of the oil; and p, a supply pressure proportionality coefficient. All the sensitivities calculated in this work are in good agreement with those obtained through recalculations.
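
    The standard first-order eigenvalue sensitivity formula that such a modal approach builds on can be sketched directly, with the derivative of the Jacobian with respect to a parameter approximated by a finite difference as mentioned above; the 3x3 state matrix is a toy stand-in, not the servomechanism model.

    ```python
    import numpy as np

    def eigenvalue_sensitivities(J_of_p, p, dp=1e-6):
        """Sensitivities d(lambda_k)/dp of the eigenvalues of J(p) using right
        eigenvectors x_k and left eigenvectors (rows of X^{-1}):
            dlambda_k/dp = w_k (dJ/dp) x_k / (w_k x_k),
        with dJ/dp approximated by a central finite difference."""
        J = J_of_p(p)
        lam, X = np.linalg.eig(J)
        W = np.linalg.inv(X)                 # rows of X^{-1} are left eigenvectors
        dJdp = (J_of_p(p + dp) - J_of_p(p - dp)) / (2 * dp)
        sens = np.array([W[k] @ dJdp @ X[:, k] / (W[k] @ X[:, k])
                         for k in range(lam.size)])
        return lam, sens

    # Hypothetical linearized state matrix J(p); p stands in for e.g. the bulk modulus.
    def J_of_p(p):
        return np.array([[0.0,  1.0,  0.0],
                         [ -p, -2.0,  1.0],
                         [0.5,  0.0, -3.0]])

    lam, dlam = eigenvalue_sensitivities(J_of_p, p=4.0)
    print("eigenvalues:  ", lam)
    print("d lambda / dp:", dlam)
    ```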

  20. On the use of finite difference matrix-vector products in Newton-Krylov solvers for implicit climate dynamics with spectral elements

    DOE PAGES

    Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.

    2015-01-01

    Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by the accuracy of the processes of interest rather than by the stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, Newton's method is applied to solve these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation, which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.

  1. Jacobian-free approximate solvers for hyperbolic systems: Application to relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Castro, Manuel J.; Gallardo, José M.; Marquina, Antonio

    2017-10-01

    We present recent advances in PVM (Polynomial Viscosity Matrix) methods based on internal approximations to the absolute value function, and compare them with Chebyshev-based PVM solvers. These solvers only require a bound on the maximum wave speed, so no spectral decomposition is needed. Another important feature of the proposed methods is that they are suitable to be written in Jacobian-free form, in which only evaluations of the physical flux are used. This is particularly interesting when considering systems for which the Jacobians involve complex expressions, e.g., the relativistic magnetohydrodynamics (RMHD) equations. On the other hand, the proposed Jacobian-free solvers have also been extended to the case of approximate DOT (Dumbser-Osher-Toro) methods, which can be regarded as simple and efficient approximations to the classical Osher-Solomon method, sharing most of its interesting features and being applicable to general hyperbolic systems. To test the properties of our schemes, a number of numerical experiments involving the RMHD equations are presented, both in one and two dimensions. The obtained results are in good agreement with those found in the literature and show that our schemes are robust and accurate, running stably under a satisfactory time step restriction. It is worth emphasizing that, although this work focuses on RMHD, the proposed schemes are suitable to be applied to general hyperbolic systems.

  2. Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.

    1981-01-01

    To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.

  3. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires, for each iteration, a number of forward simulation runs much smaller than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of the breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with the corresponding estimation uncertainty, and the estimated K field captured the key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.

  4. Unifying dynamical and structural stability of equilibria

    NASA Astrophysics Data System (ADS)

    Arnoldi, Jean-François; Haegeman, Bart

    2016-09-01

    We exhibit a fundamental relationship between measures of dynamical and structural stability of linear dynamical systems-e.g. linearized models in the vicinity of equilibria. We show that dynamical stability, quantified via the response to external perturbations (i.e. perturbation of dynamical variables), coincides with the minimal internal perturbation (i.e. perturbations of interactions between variables) able to render the system unstable. First, by reformulating a result of control theory, we explain that harmonic external perturbations reflect the spectral sensitivity of the Jacobian matrix at the equilibrium, with respect to constant changes of its coefficients. However, for this equivalence to hold, imaginary changes of the Jacobian's coefficients have to be allowed. The connection with dynamical stability is thus lost for real dynamical systems. We show that this issue can be avoided, thus recovering the fundamental link between dynamical and structural stability, by considering stochastic noise as external and internal perturbations. More precisely, we demonstrate that a linear system's response to white-noise perturbations directly reflects the intensity of internal white-noise disturbance that it can accommodate before becoming stochastically unstable.
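    A compact way to compute the white-noise response discussed above for a given linear(ized) system is through the stationary covariance of a linear stochastic differential equation, which solves a Lyapunov equation; the Jacobian below is a random stable matrix, not a model from the paper.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      rng = np.random.default_rng(1)
      n = 4
      J = rng.normal(size=(n, n))
      J -= (np.max(np.linalg.eigvals(J).real) + 1.0) * np.eye(n)  # shift to make J stable

      # Dynamical stability: all eigenvalues of the Jacobian lie in the left half-plane.
      assert np.max(np.linalg.eigvals(J).real) < 0

      # Stationary covariance C of dx = J x dt + dW solves J C + C J^T = -I.
      C = solve_continuous_lyapunov(J, -np.eye(n))
      response = np.trace(C)   # a scalar measure of the response to white noise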

  5. Unifying dynamical and structural stability of equilibria.

    PubMed

    Arnoldi, Jean-François; Haegeman, Bart

    2016-09-01

    We exhibit a fundamental relationship between measures of dynamical and structural stability of linear dynamical systems-e.g. linearized models in the vicinity of equilibria. We show that dynamical stability, quantified via the response to external perturbations (i.e. perturbation of dynamical variables), coincides with the minimal internal perturbation (i.e. perturbations of interactions between variables) able to render the system unstable. First, by reformulating a result of control theory, we explain that harmonic external perturbations reflect the spectral sensitivity of the Jacobian matrix at the equilibrium, with respect to constant changes of its coefficients. However, for this equivalence to hold, imaginary changes of the Jacobian's coefficients have to be allowed. The connection with dynamical stability is thus lost for real dynamical systems. We show that this issue can be avoided, thus recovering the fundamental link between dynamical and structural stability, by considering stochastic noise as external and internal perturbations. More precisely, we demonstrate that a linear system's response to white-noise perturbations directly reflects the intensity of internal white-noise disturbance that it can accommodate before becoming stochastically unstable.

  6. Biomechanical stability analysis of the lambda-model controlling one joint.

    PubMed

    Lan, L; Zhu, K Y

    2007-06-01

    Computer modeling and control of the human motor system might be helpful for understanding the mechanism of human motor system and for the diagnosis and treatment of neuromuscular disorders. In this paper, a brief view of the equilibrium point hypothesis for human motor system modeling is given, and the lambda-model derived from this hypothesis is studied. The stability of the lambda-model based on equilibrium and Jacobian matrix is investigated. The results obtained in this paper suggest that the lambda-model is stable and has a unique equilibrium point under certain conditions.
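    The kind of Jacobian-based check referred to above can be sketched generically: estimate the Jacobian of the closed-loop dynamics at an equilibrium by finite differences and test whether all eigenvalues have negative real parts. The dynamics below are a damped single-joint placeholder, not the lambda-model itself.

      import numpy as np

      def numerical_jacobian(f, x, eps=1e-6):
          """Finite-difference Jacobian of f at x."""
          fx = f(x)
          J = np.zeros((len(fx), len(x)))
          for j in range(len(x)):
              xp = x.copy()
              xp[j] += eps
              J[:, j] = (f(xp) - fx) / eps
          return J

      # Placeholder single-joint dynamics, state x = (angle, angular velocity).
      def dynamics(x):
          theta, omega = x
          return np.array([omega, -2.0 * omega - 10.0 * np.sin(theta)])

      x_eq = np.array([0.0, 0.0])                       # equilibrium point
      J = numerical_jacobian(dynamics, x_eq)
      stable = np.all(np.linalg.eigvals(J).real < 0)    # linearized stability test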

  7. Manipulator control by exact linearization

    NASA Technical Reports Server (NTRS)

    Kruetz, K.

    1987-01-01

    Comments on the application to rigid link manipulators of geometric control theory, resolved acceleration control, operational space control, and nonlinear decoupling theory are given, and the essential unity of these techniques for externally linearizing and decoupling end effector dynamics is discussed. Exploiting the fact that the mass matrix of a rigid link manipulator is positive definite, a consequence of rigid link manipulators belonging to the class of natural physical systems, it is shown that a necessary and sufficient condition for a locally externally linearizing and output decoupling feedback law to exist is that the end effector Jacobian matrix be nonsingular. Furthermore, this linearizing feedback is easy to produce.
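    A minimal sketch of a linearizing feedback of the kind discussed here (illustrative placeholder values, not a specific manipulator model): with mass matrix M(q), bias torques h(q, qdot) and a nonsingular end-effector Jacobian J(q), the torque realizing a commanded end-effector acceleration follows from tau = M J^{-1}(a_des - Jdot qdot) + h.

      import numpy as np

      def linearizing_torque(M, h, J, Jdot, qdot, a_des):
          """Torque producing end-effector acceleration a_des (J assumed nonsingular).

          From a = J qddot + Jdot qdot and M qddot + h = tau.
          """
          qddot = np.linalg.solve(J, a_des - Jdot @ qdot)
          return M @ qddot + h

      # Illustrative 2-DOF numbers (placeholders, not a real robot).
      M = np.array([[2.0, 0.3], [0.3, 1.0]])
      h = np.array([0.5, -0.2])
      J = np.array([[1.0, 0.4], [0.0, 0.9]])
      Jdot = np.zeros((2, 2))
      qdot = np.array([0.1, -0.05])
      a_des = np.array([0.2, 0.0])
      tau = linearizing_torque(M, h, J, Jdot, qdot, a_des)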

  8. Solution of large-scale control systems based on feedback loops in power network simulation

    NASA Astrophysics Data System (ADS)

    Mugombozi, Chuma Francis

    The generation of electrical energy, as well as its transportation and consumption, requires complex control systems for the regulation of power and frequency. These control systems must take into account, among others, new energy sources such as wind energy and new technologies for interconnection by high-voltage DC links. They must be able to monitor and achieve such regulation in accordance with the dynamics of the energy source, faults, and other events that may induce transient phenomena in the power network. Such transient conditions have to be analyzed using the most accurate and detailed, and hence complex, models of the control system. In addition, in the feasibility-study phase, in the calibration or setup of equipment, and in the operation of the power network, engineers may require decision-aid tools; these include, for instance, knowledge of the energy dissipated in arresters during transient analysis. Such tools use simulation program data as inputs and may require that complex functions be solved with numerical methods. These functions are part of the control system in the computer simulator. Moreover, simulation evolves in the broader context of digital controller development, distributed and parallel high-performance computing, and the rapid evolution of (multiprocessor) computer technology. In this context, continuing improvement of the control-equation solver is welcome. Control systems are modelled as a simultaneous system of equations Ax = b. These equations are sometimes nonlinear, with feedback loops, and thus require iterative Newton methods, including the formation of a Jacobian matrix, ordering, and processing by graph-theory tools. The proposed approach is based on the formulation of a reduced-rank Jacobian matrix, whose dimension is reduced to the number of feedback loops. With this new approach, gains in computation speed are expected without compromising accuracy when compared to the classical full-rank Jacobian representation. A directed-graph representation is adopted and a proper approach for detecting and removing cycles within the graph is introduced, based on the condition that all eigenvalues of the adjacency matrix of the graph be zero. Transforming the control graph into a cycle-free graph permits formulating control equations only at the feedback points. This yields a general feedback interconnection (GFBI) representation of control, which is the main contribution of this thesis. Methods for solving the (nonlinear) equations of control systems were deployed within the new GFBI approach; five variants of the new approach are illustrated, including a basic Newton method, the more robust Dogleg method, and a fixed-point iteration method. The presented approach is implemented in the Electromagnetic Transients program EMTP-RV and tested on practical systems of various types and levels of complexity: a PLL, an asynchronous machine with 87 blocks reduced to 23 feedback equations by GFBI, and 12 wind power plants integrated into the IEEE 39-bus system. Further analysis, which opens avenues for future research, includes comparison of the proposed approach against existing ones. With respect to the representation alone, it is shown that the proposed approach is equivalent to the classical full representation of the system of equations through a proper substitution process that follows the topological ordering and skips the feedback variables identified by GFBI. Moreover, a second comparison with a state-space-based approach, such as that in MATLAB/Simulink, shows that the output-evaluation step in the state-space approach with algebraic constraints is similar to that of the GFBI, the algebraic constraints playing a role similar to the feedback variables. A difference may arise, however, when the number of algebraic constraints is not the optimal number of cuts for the GFBI method: for the PLL, for example, MATLAB/Simulink generated 3 constraints while the GFBI generated only 2, and the GFBI method may offer some advantage in this case. A final line of analysis prompted further work on initialization. It is shown that the GFBI method may modify the convergence properties of the Newton iterations. The Newton-Kantorovich theorem, using bounds on the norms of the Jacobian, has been applied to the proposed GFBI and to the classical full representation of the control equations; the expressions of the Jacobian norms have been established for generic cases using the Coates graph. Analysis of a simple case shows that, for the same initial conditions, the behaviour predicted by the Newton-Kantorovich theorem differs between the two representations, and these differences may be more pronounced in the nonlinear case. Further work would be useful to investigate this aspect and, eventually, pave the way to new initialization approaches. Despite these limitations and the areas for improvement left to further work, this thesis contributes a notable reduction in computation time for the solution of control systems in simulation. (Abstract shortened by UMI.)
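    The graph step described above (cycle detection and cutting to expose feedback variables) can be sketched on a toy block diagram; the cut-selection heuristic below is a naive placeholder, not the thesis's GFBI algorithm, and assumes the networkx package.

      import networkx as nx

      # Toy control-block graph: nodes are blocks, edges are signal connections.
      G = nx.DiGraph([("sum", "gain"), ("gain", "integrator"),
                      ("integrator", "sum"),                        # feedback loop 1
                      ("integrator", "filter"), ("filter", "sum")]) # feedback loop 2

      print("feedback loops:", list(nx.simple_cycles(G)))

      # Naive cut selection: break one edge per remaining cycle until the graph
      # is acyclic; the removed edges define the feedback (cut) variables.
      cuts, H = [], G.copy()
      while not nx.is_directed_acyclic_graph(H):
          cycle = next(nx.simple_cycles(H))
          edge = (cycle[-1], cycle[0])      # closing edge of the detected cycle
          H.remove_edge(*edge)
          cuts.append(edge)
      print("feedback (cut) edges:", cuts)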

  9. An entropy-variables-based formulation of residual distribution schemes for non-equilibrium flows

    NASA Astrophysics Data System (ADS)

    Garicano-Mena, Jesús; Lani, Andrea; Degrez, Gérard

    2018-06-01

    In this paper we present an extension of Residual Distribution techniques for the simulation of compressible flows in non-equilibrium conditions. The latter are modeled by means of a state-of-the-art multi-species and two-temperature model. An entropy-based variable transformation that symmetrizes the projected advective Jacobian for such a thermophysical model is introduced. Moreover, the transformed advection Jacobian matrix presents a block-diagonal structure, with the mass-species and electronic-vibrational energy equations completely decoupled from the momentum and total energy sub-system. The advantageous structure of the transformed advective Jacobian can be exploited by contour-integration-based Residual Distribution techniques: established schemes that operate on dense matrices can be replaced by the same scheme operating on the momentum-energy subsystem matrix together with repeated application of a scalar scheme to the mass-species and electronic-vibrational energy terms. Finally, the performance gain of the symmetrizing-variables formulation is quantified on a selection of representative test cases, ranging from subsonic to hypersonic, in inviscid or viscous conditions.

  10. Experiments with conjugate gradient algorithms for homotopy curve tracking

    NASA Technical Reports Server (NTRS)

    Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.

    1991-01-01

    There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.

  11. The “Dry-Run” Analysis: A Method for Evaluating Risk Scores for Confounding Control

    PubMed Central

    Wyss, Richard; Hansen, Ben B.; Ellis, Alan R.; Gagne, Joshua J.; Desai, Rishi J.; Glynn, Robert J.; Stürmer, Til

    2017-01-01

    A propensity score (PS) model's ability to control confounding can be assessed by evaluating covariate balance across exposure groups after PS adjustment. The optimal strategy for evaluating a disease risk score (DRS) model's ability to control confounding is less clear. DRS models cannot be evaluated through balance checks within the full population, and they are usually assessed through prediction diagnostics and goodness-of-fit tests. A proposed alternative is the “dry-run” analysis, which divides the unexposed population into “pseudo-exposed” and “pseudo-unexposed” groups so that differences on observed covariates resemble differences between the actual exposed and unexposed populations. With no exposure effect separating the pseudo-exposed and pseudo-unexposed groups, a DRS model is evaluated by its ability to retrieve an unconfounded null estimate after adjustment in this pseudo-population. We used simulations and an empirical example to compare traditional DRS performance metrics with the dry-run validation. In simulations, the dry run often improved assessment of confounding control, compared with the C statistic and goodness-of-fit tests. In the empirical example, PS and DRS matching gave similar results and showed good performance in terms of covariate balance (PS matching) and controlling confounding in the dry-run analysis (DRS matching). The dry-run analysis may prove useful in evaluating confounding control through DRS models. PMID:28338910

  12. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
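    A minimal sketch of the Broyden idea described above (not the report's limited-memory implementation): the Jacobian is replaced by a secant approximation that is updated with rank-one corrections, so no further Jacobian evaluations are needed after the initialization.

      import numpy as np

      def broyden_solve(f, x0, B0, tol=1e-10, maxit=50):
          """Solve f(x) = 0 with Broyden's 'good' rank-one secant updates."""
          x, B = np.asarray(x0, float), B0.astype(float).copy()
          fx = f(x)
          for _ in range(maxit):
              if np.linalg.norm(fx) < tol:
                  break
              s = np.linalg.solve(B, -fx)              # quasi-Newton step
              x_new = x + s
              f_new = f(x_new)
              y = f_new - fx
              B += np.outer(y - B @ s, s) / (s @ s)    # rank-one secant update
              x, fx = x_new, f_new
          return x

      # Small test system: x0 + x1 = 3 and x0^2 + x1^2 = 9 (root near (3, 0)).
      f = lambda x: np.array([x[0] + x[1] - 3.0, x[0]**2 + x[1]**2 - 9.0])
      B0 = np.array([[1.0, 1.0], [5.0, 1.0]])          # Jacobian at the initial guess
      print(broyden_solve(f, np.array([2.5, 0.5]), B0))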

  13. Monte Carlo based method for fluorescence tomographic imaging with lifetime multiplexing using time gates

    PubMed Central

    Chen, Jin; Venugopal, Vivek; Intes, Xavier

    2011-01-01

    Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiple scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™800CW respectively embedded in a diffuse media mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610

  14. An adjoint-based method for a linear mechanically-coupled tumor model: application to estimate the spatial variation of murine glioma growth based on diffusion weighted magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.

    2018-06-01

    We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.

  15. Dynamic parameter identification of robot arms with servo-controlled electrical motors

    NASA Astrophysics Data System (ADS)

    Jiang, Zhao-Hui; Senda, Hiroshi

    2005-12-01

    This paper addresses the issue of dynamic parameter identification of a robot manipulator with servo-controlled electrical motors. An assumption is made that all kinematic parameters, such as link lengths, are known, and only dynamic parameters containing mass, moment of inertia, and their functions need to be identified. First, we derive the dynamics of the robot arm in a form linear in the unknown dynamic parameters by taking the dynamic characteristics of the motor and servo unit into consideration. Then, we implement the parameter identification approach to identify the unknown parameters for each individual link separately. A pseudo-inverse matrix is used in the formulation of the parameter identification. The optimal solution is guaranteed in the sense of least squares of the mean errors. A Direct Drive (DD) SCARA-type industrial robot arm, the AdeptOne, is used as an application example of the parameter identification. Simulations and experiments for both open-loop and closed-loop control are carried out. Comparison of the results confirms the correctness and usefulness of the parameter identification and the derived dynamic model.
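    When the dynamics are linear in the unknown parameters, tau = Y(q, qdot, qddot) * theta, stacking samples gives an overdetermined linear system that the pseudo-inverse solves in the least-squares sense; a synthetic sketch (not the AdeptOne regressor) is:

      import numpy as np

      rng = np.random.default_rng(2)
      n_samples, n_params = 200, 4

      # Synthetic regressor Y (in practice built from joint trajectories) and true parameters.
      Y = rng.normal(size=(n_samples, n_params))
      theta_true = np.array([1.2, 0.05, 0.8, 0.3])                # e.g. mass / inertia terms
      tau = Y @ theta_true + 0.01 * rng.normal(size=n_samples)    # noisy measured torques

      # Least-squares estimate via the Moore-Penrose pseudo-inverse.
      theta_hat = np.linalg.pinv(Y) @ tau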

  16. Reflection full-waveform inversion using a modified phase misfit function

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe

    2017-09-01

    Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models compared with conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude. The separation of phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain the pseudo-phase information. Then, we establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. The application to a portion of the Sigsbee2A model and comparison with the inversion results of the improved RFWI and conventional FWI methods verify that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.
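    The envelope data such a phase calculation starts from can be obtained with the analytic signal; the snippet below shows the standard envelope and instantaneous phase of a synthetic trace via the Hilbert transform, which is only one possible ingredient and not the paper's specific pseudo-phase construction.

      import numpy as np
      from scipy.signal import hilbert

      # Synthetic seismic-like trace: a Ricker-style wavelet.
      t = np.linspace(0.0, 1.0, 1000)
      arg = (np.pi * 25.0 * (t - 0.5)) ** 2
      trace = (1.0 - 2.0 * arg) * np.exp(-arg)

      analytic = hilbert(trace)                      # analytic signal
      envelope = np.abs(analytic)                    # phase-envelope data
      inst_phase = np.unwrap(np.angle(analytic))     # unwrapped instantaneous phase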

  17. Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ruf, Joe

    2007-01-01

    As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomena, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal test. A viewgraph presentation on a model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.

  18. Evolution of the regions of the 3D particle motion in the regular polygon problem of (N+1) bodies with a quasi-homogeneous potential

    NASA Astrophysics Data System (ADS)

    Fakis, Demetrios; Kalvouridis, Tilemahos

    2017-09-01

    The regular polygon problem of (N+1) bodies deals with the dynamics of a small body, natural or artificial, in the force field of N big bodies, ν = N - 1 of which have equal masses and form an imaginary regular ν-gon, while the Nth body, with a different mass, is located at the center of mass of the system. In this work, instead of considering Newtonian potentials and forces, we assume that the big bodies create quasi-homogeneous potentials, in the sense that we insert into the inverse-square Newtonian law of gravitation an inverse-cube corrective term, aiming to approximate various phenomena due to the shape of the primaries or to the radiation emitted from them. Based on this new consideration, we apply a general methodology in order to investigate, by means of the zero-velocity surfaces, the regions where 3D motions of the small body are allowed, their evolution and parametric variation, their topological bifurcations, as well as the existing trapping domains of the particle. We note that this process is a fundamental step of great importance, on the long way towards finding solutions of any kind, in the study of many dynamical systems characterized by a Jacobian-type integral of motion.

  19. Design and Optimization of a Hybrid-Driven Waist Rehabilitation Robot

    PubMed Central

    Zi, Bin; Yin, Guangcai; Zhang, Dan

    2016-01-01

    In this paper a waist rehabilitation robot driven by cables and pneumatic artificial muscles (PAMs) has been conceptualized and designed. In the process of mechanism design, the human body structure, the waist movement characteristics, and the actuators’ driving characteristics are the main considerable factors to make the hybrid-driven waist rehabilitation robot (HWRR) cost-effective, safe, flexible, and well-adapted. A variety of sensors are chosen to measure the position and orientation of the recovery patient to ensure patient safety at the same time as the structure design. According to the structure specialty and function, the HWRR is divided into two independent parallel robots: the waist twist device and the lower limb traction device. Then these two devices are analyzed and evaluated, respectively. Considering the characters of the human body in the HWRR, the inverse kinematics and statics are studied when the waist and the lower limb are considered as a spring and link, respectively. Based on the inverse kinematics and statics, the effect of the contraction parameter of the PAM is considered in the optimization of the waist twist device, and the lower limb traction device is optimized using particle swarm optimization (PSO) to minimize the global conditioning number over the feasible workspace. As a result of the optimization, an optimal rehabilitation robot design is obtained and the condition number of the Jacobian matrix over the feasible workspace is also calculated. PMID:27983626

  20. A non-stochastic iterative computational method to model light propagation in turbid media

    NASA Astrophysics Data System (ADS)

    McIntyre, Thomas J.; Zemp, Roger J.

    2015-03-01

    Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.

  1. GARLIC - A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-04-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.

  2. Design and Optimization of a Hybrid-Driven Waist Rehabilitation Robot.

    PubMed

    Zi, Bin; Yin, Guangcai; Zhang, Dan

    2016-12-14

    In this paper a waist rehabilitation robot driven by cables and pneumatic artificial muscles (PAMs) has been conceptualized and designed. In the process of mechanism design, the human body structure, the waist movement characteristics, and the actuators' driving characteristics are the main considerable factors to make the hybrid-driven waist rehabilitation robot (HWRR) cost-effective, safe, flexible, and well-adapted. A variety of sensors are chosen to measure the position and orientation of the recovery patient to ensure patient safety at the same time as the structure design. According to the structure specialty and function, the HWRR is divided into two independent parallel robots: the waist twist device and the lower limb traction device. Then these two devices are analyzed and evaluated, respectively. Considering the characters of the human body in the HWRR, the inverse kinematics and statics are studied when the waist and the lower limb are considered as a spring and link, respectively. Based on the inverse kinematics and statics, the effect of the contraction parameter of the PAM is considered in the optimization of the waist twist device, and the lower limb traction device is optimized using particle swarm optimization (PSO) to minimize the global conditioning number over the feasible workspace. As a result of the optimization, an optimal rehabilitation robot design is obtained and the condition number of the Jacobian matrix over the feasible workspace is also calculated.

  3. Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.; Esmaeili, S.

    2015-12-01

    We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward-model input; forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix and the derivatives of the observation data with respect to the model parameters are computed using a finite-difference method. Next, an iterative process of building new models by updating the initial values starts in order to minimize the objective function. Another measure of the goodness of the final acceptable model is the correlation coefficient, which is calculated based on the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
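    The inner loop sketched in this record (finite-difference Jacobian plus a Gauss-Marquardt-Levenberg update) can be written compactly; the exponential-decay forward model below is a toy stand-in for GPRMax, and this is not PEST's actual implementation.

      import numpy as np

      def fd_jacobian(forward, p, eps=1e-6):
          """Finite-difference Jacobian of the forward model with respect to p."""
          f0 = forward(p)
          J = np.zeros((len(f0), len(p)))
          for j in range(len(p)):
              pp = p.copy()
              pp[j] += eps
              J[:, j] = (forward(pp) - f0) / eps
          return J

      def lm_step(forward, p, data, lam):
          """One Gauss-Marquardt-Levenberg parameter update."""
          r = data - forward(p)
          J = fd_jacobian(forward, p)
          A = J.T @ J + lam * np.eye(len(p))
          return p + np.linalg.solve(A, J.T @ r)

      # Toy forward model: amplitude and decay rate of an exponential.
      t = np.linspace(0.0, 5.0, 50)
      forward = lambda p: p[0] * np.exp(-p[1] * t)
      data = forward(np.array([2.0, 0.7]))

      p = np.array([1.5, 0.8])        # initial model
      for _ in range(20):
          p = lm_step(forward, p, data, lam=1e-3)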

  4. Codimension-1 Sliding Bifurcations of a Filippov Pest Growth Model with Threshold Policy

    NASA Astrophysics Data System (ADS)

    Tang, Sanyi; Tang, Guangyao; Qin, Wenjie

    A Filippov system is proposed to describe stage-structured, nonsmooth pest growth with threshold policy control (TPC). The TPC measure is represented by the total density of both juveniles and adults being chosen as an index for decisions on when to implement chemical control strategies. The proposed Filippov system can have three pieces of sliding segments and three pseudo-equilibria, which result in rich sliding-mode bifurcations and local sliding bifurcations, including boundary node (boundary focus, or boundary saddle) and tangency bifurcations. As the threshold density varies, the model exhibits interesting global sliding bifurcations sequentially: touching → buckling → crossing → sliding homoclinic orbit to a pseudo-saddle → crossing → touching bifurcations. In particular, bifurcation of a homoclinic orbit to a pseudo-saddle with a figure-of-eight shape, to a pseudo-saddle-node, or to a standard saddle-node has been observed for some parameter sets. This implies that control outcomes are sensitive to the threshold level, and hence it is crucial to choose the threshold level at which to initiate the control strategy. One more sliding segment (or pseudo-equilibrium) is induced by the total-density-guided switching policy, compared with the policy guided only by juvenile density, implying that this control policy is more effective in terms of preventing multiple pest outbreaks or causing the density of pests to stabilize at a desired level such as an economic threshold.

  5. A robust bi-orthogonal/dynamically-orthogonal method using the covariance pseudo-inverse with application to stochastic flow problems

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em

    2017-09-01

    We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework and, in addition, we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to the DO near eigenvalue crossing using the equivalence theorem and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to the newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4]. We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) the stochastic Burgers equation: we analyze the performance of the method in the presence of eigenvalue crossing and zero eigenvalues; (ii) the stochastic Kovasznay flow: we examine the method in the presence of a singular covariance matrix; and (iii) an incompressible flow over a cylinder, where we examine the adaptivity of the method and where, for large stochastic forcing, thirteen DO/BO modes are active.
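    A small sketch of the covariance pseudo-inverse ingredient (illustrative, not the authors' adaptive DO/BO implementation): invert a symmetric, possibly rank-deficient covariance matrix through its eigendecomposition, discarding eigenvalues below a threshold.

      import numpy as np

      def covariance_pinv(C, rel_tol=1e-10):
          """Pseudo-inverse of a symmetric covariance matrix, truncating tiny eigenvalues."""
          w, V = np.linalg.eigh(C)
          keep = w > rel_tol * w.max()
          return (V[:, keep] / w[keep]) @ V[:, keep].T

      # Rank-deficient example: covariance estimated from fewer samples than dimensions.
      rng = np.random.default_rng(4)
      X = rng.normal(size=(3, 6))              # 3 samples in 6 dimensions
      C = np.cov(X, rowvar=False)              # 6 x 6 matrix of rank <= 2
      C_pinv = covariance_pinv(C)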

  6. Electronic structure and spectroscopic properties of mononuclear manganese(III) Schiff base complexes: a systematic study on [Mn(acen)X] complexes by EPR, UV/vis, and MCD spectroscopy (X = Hal, NCS).

    PubMed

    Westphal, Anne; Klinkebiel, Arne; Berends, Hans-Martin; Broda, Henning; Kurz, Philipp; Tuczek, Felix

    2013-03-04

    The manganese(III) Schiff base complexes [Mn(acen)X] (H2acen: N,N'-ethylenebis(acetylacetone)imine, X: I(-), Br(-), Cl(-), NCS(-)) are considered as model systems for a combined study of the electronic structure using vibrational, UV/vis absorption, parallel-mode electron paramagnetic resonance (EPR) and low-temperature magnetic circular dichroism (MCD) spectroscopy. By variation of the co-ligand X, the influence of the axial ligand field within a given square-pyramidal coordination geometry on the UV/vis, EPR, and MCD spectra of the title compounds is investigated. Between 25000 and 35000 cm(-1), the low-temperature MCD spectra are dominated by two very intense, oppositely signed pseudo-A terms, referred to as "double pseudo-A terms", which change their signs within the [Mn(acen)X] series dependent on the axial ligand X. Based on molecular orbital (MO) and symmetry considerations, these features are assigned to π(n.b.)(s, a) → yz, z(2) ligand-to-metal charge transfer transitions. The individual MCD signs are directly determined from the calculated MOs of the [Mn(acen)X] complexes. The observed sign change is explained by an inversion of symmetry among the π(n.b.)(s, a) donor orbitals which leads to an interchange of the positive and negative pseudo-A terms constituting the "double pseudo-A term".

  7. N-Methyl Inversion in Pseudo-Pelletierine

    NASA Astrophysics Data System (ADS)

    Vallejo-López, Montserrat; Ecija, Patricia; Cocinero, Emilio J.; Lesarri, Alberto; Basterretxea, Francisco J.; Fernández, José A.

    2016-06-01

    We have previously conducted rotational studies of several tropanes, since this bicyclic structural motif forms the core of different alkaloids of pharmaceutical interest. Now we report on the conformational properties and molecular structure of pseudo-pelletierine (9-methyl-9-azabicyclo[3.3.1]nonan-3-one), probed in a jet expansion with Fourier-transform microwave spectroscopy. Pseudo-pelletierine is an azabicycle with two fused six-membered rings, where the N-methyl group can adopt axial or equatorial conformations through inversion. The two conformations were detected in the rotational spectrum, recorded in the region 6-18 GHz. Unlike tropinone and N-methylpiperidone, where the most stable conformer is equatorial, the axial species was found to be dominant for pseudo-pelletierine. All monosubstituted isotopic species (13C, 15N and 18O) were identified for the axial conformer, leading to an accurate determination of the effective and substitution structures. An estimation of conformational populations was derived from relative intensities. The experimental results will be compared with ab initio (MP2) and DFT (M06-2X, B3LYP) calculations. E. J. Cocinero, A. Lesarri, P. Écija, J.-U. Grabow, J. A. Fernández, F. Castaño, Phys. Chem. Chem. Phys. 2010, 49, 4503; P. Écija, E. J. Cocinero, A. Lesarri, F. J. Basterretxea, J. A. Fernández, F. Castaño, Chem. Phys. Chem. 2013, 14, 1830; P. Écija, M. Vallejo-Lopez, I. Uriarte, F. J. Basterretxea, A. Lesarri, J. A. Fernández, E. J. Cocinero, submitted 2016

  8. Pseudo paths towards minimum energy states in network dynamics

    NASA Astrophysics Data System (ADS)

    Hedayatifar, L.; Hassanibesheli, F.; Shirazi, A. H.; Vasheghani Farahani, S.; Jafari, G. R.

    2017-10-01

    The dynamics of networks evolving under Heider balance theory moves towards lower-tension states. The condition derived from this theory forces agents to reevaluate and modify their interactions to achieve equilibrium. These possible changes in the network's topology can be considered as various paths that guide systems to minimum-energy states. Based on this theory, the final destination of a system could reside at a local energy minimum, a "jammed state", or at the global minimum energy, the balanced states. The question we would like to address is whether jammed states appear just by chance, or whether there exist pseudo paths that bind a system towards a jammed state. We introduce an indicator to suspect the location of a jammed state based on the Inverse Participation Ratio (IPR) method. We identify a margin before a local minimum where the number of possible paths drastically decreases. This condition proves adequate for ending up in a jammed state.

  9. Laplace-Fourier-domain dispersion analysis of an average derivative optimal scheme for scalar-wave equation

    NASA Astrophysics Data System (ADS)

    Chen, Jing-Bo

    2014-06-01

    By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.

  10. Dealing with Uncertainties in Initial Orbit Determination

    NASA Technical Reports Server (NTRS)

    Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato

    2015-01-01

    A method to deal with uncertainties in initial orbit determination (IOD) is presented. It is based on the use of Taylor differential algebra (DA) to nonlinearly map the observation uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in a Taylor series with respect to the measurement errors. When more observations are available, high-order inversion tools are exploited to obtain full-state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high-order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered and simplified Keplerian dynamics are adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the features and performance of the proposed methodology.

  11. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE PAGES

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...

    2016-06-09

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with “big data” processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a “Jacobian-free” inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In our paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.

  12. Agradient velocity, vortical motion and gravity waves in a rotating shallow-water model

    NASA Astrophysics Data System (ADS)

    Sutyrin, Georgi G.

    2004-07-01

    A new approach to modelling slow vortical motion and fast inertia-gravity waves is suggested within the rotating shallow-water primitive equations with arbitrary topography. The velocity is exactly expressed as a sum of the gradient wind, described by the Bernoulli function, B, and the remaining agradient part, proportional to the velocity tendency. Then the equation for inverse potential vorticity, Q, as well as the momentum equations for the agradient velocity, include the same source of intrinsic flow evolution expressed as a single term J(B, Q), where J is the Jacobian operator (for any steady state J(B, Q) = 0). Two components of the agradient velocity are responsible for fast inertia-gravity wave propagation, similar to the traditionally used divergence and ageostrophic vorticity. This approach allows for the construction of balance relations for vortical dynamics and potential vorticity inversion schemes even for moderate Rossby and Froude numbers, assuming the characteristic value of |J(B, Q)| = ε to be small. The components of the agradient velocity are used as the fast variables slaved to potential vorticity, which allows for diagnostic estimates of the velocity tendency, the direct potential vorticity inversion with accuracy of ε², and the corresponding potential vorticity-conserving agradient velocity balance model (AVBM). The ultimate limitations of constructing the balance are revealed in the form of an ellipticity condition for the balanced tendency of the Bernoulli function, which incorporates both known criteria of formal stability: the gradient wind modified by the characteristic vortical Rossby wave phase speed should be subcritical. The accuracy of the AVBM is illustrated by considering the linear normal modes and coastal Kelvin waves in the f-plane channel with topography.

  13. Conservative-variable average states for equilibrium gas multi-dimensional fluxes

    NASA Technical Reports Server (NTRS)

    Iannelli, G. S.

    1992-01-01

    Modern split component evaluations of the flux vector Jacobians are thoroughly analyzed for equilibrium-gas average-state determinations. It is shown that all such derivations satisfy a fundamental eigenvalue consistency theorem. A conservative-variable average state is then developed for arbitrary equilibrium-gas equations of state and curvilinear-coordinate fluxes. Original expressions for eigenvalues, sound speed, Mach number, and eigenvectors are then determined for a general average Jacobian, and it is shown that the average eigenvalues, Mach number, and eigenvectors may not coincide with their classical pointwise counterparts. A general equilibrium-gas equation of state is then discussed for conservative-variable computational fluid dynamics (CFD) Euler formulations. The associated derivations lead to unique compatibility relations that constrain the pressure Jacobian derivatives. Thereafter, alternative forms for the pressure variation and average sound speed are developed in terms of two average pressure Jacobian derivatives. Significantly, no additional degree of freedom exists in the determination of these two average partial derivatives of pressure. Therefore, they are simultaneously computed exactly without any auxiliary relation, hence without any geometric solution projection or arbitrary scale factors. Several alternative formulations are then compared and key differences highlighted with emphasis on the determination of the pressure variation and average sound speed. The relevant underlying assumptions are identified, including some subtle approximations that are inherently employed in published average-state procedures. Finally, a representative test case is discussed for which an intrinsically exact average state is determined. This exact state is then compared with the predictions of recent methods, and their inherent approximations are appropriately quantified.

  14. Top-down NOx and SO2 emissions simultaneously estimated from different OMI retrievals and inversion frameworks

    NASA Astrophysics Data System (ADS)

    Qu, Z.; Henze, D. K.; Wang, J.; Xu, X.; Wang, Y.

    2017-12-01

    Quantifying emissions trends of nitrogen oxides (NOx) and sulfur dioxide (SO2) is important for improving understanding of air pollution and the effectiveness of emission control strategies. We estimate long-term (2005-2016) global (2° x 2.5° resolution) and regional (North America and East Asia at 0.5° x 0.667° resolution) NOx emissions using a recently developed hybrid (mass-balance / 4D-Var) method with GEOS-Chem. NASA standard product and DOMINO retrievals of NO2 column are both used to constrain emissions; comparison of these results provides insight into regions where trends are most robust with respect to retrieval uncertainties, and highlights regions where seemingly significant trends are retrieval-specific. To incorporate chemical interactions among species, we extend our hybrid method to assimilate NO2 and SO2 observations and optimize NOx and SO2 emissions simultaneously. Due to chemical interactions, inclusion of SO2 observations leads to 30% grid-scale differences in posterior NOx emissions compared to those constrained only by NO2 observations. When assimilating and optimizing both species in pseudo observation tests, the sum of the normalized mean squared error (compared to the true emissions) of NOx and SO2 posterior emissions are 54-63% smaller than when observing/constraining a single species. NOx and SO2 emissions are also correlated through the amount of fuel combustion. To incorporate this correlation into the inversion, we optimize seven sector-specific emission scaling factors, including industry, energy, residential, aviation, transportation, shipping and agriculture. We compare posterior emissions from inversions optimizing only species' emissions, only sector-based emissions, and both species' and sector-based emissions. In situ measurements of NOx and SO2 are applied to evaluate the performance of these inversions. The impacts of the inversion on PM2.5 and O3 concentrations and premature deaths are also evaluated.

  15. Numerical simulations of induction and MWD logging tools and data inversion method with X-window interface on a UNIX workstation

    NASA Astrophysics Data System (ADS)

    Tian, Xiang-Dong

    The purpose of this research is to simulate induction and measuring-while-drilling (MWD) logs. In simulation of logs, there are two tasks. The first task, the forward modeling procedure, is to compute the logs from a known formation. The second task, the inversion procedure, is to determine the unknown properties of the formation from the measured field logs. In general, the inversion procedure requires the solution of a forward model. In this study, a stable numerical method to simulate induction and MWD logs is presented. The proposed algorithm is based on a horizontal eigenmode expansion method. Vertical propagation of modes is modeled by a three-layer module. The multilayer cases are treated as a cascade of these modules. The mode-tracing algorithm possesses stability characteristics that are superior to those of other methods. This method is applied to simulate the logs in formations with both vertical and horizontal layers, and is also used to study the groove effects of the MWD tool. The results are very good. Two-dimensional inversion of induction logs is a nonlinear problem. Nonlinear functions of the apparent conductivity are expanded into a Taylor series. After truncating the high-order terms in this Taylor series, the nonlinear functions are linearized. An iterative procedure is then devised to solve the inversion problem. In each iteration, the Jacobian matrix is calculated, and a small variation computed using the least-squares method is used to modify the background medium. Finally, the inverted medium is obtained. The horizontal eigenstate method is used to solve the forward problem. It is found that a good inverted formation can be obtained by using the measurements. In order to help the user simulate the induction logs conveniently, a Wellog Simulator, based on the X-window system, is developed. The application software (FORTRAN codes) embedded in the Simulator is designed to simulate the responses of the induction tools in layered formations with dipping beds. The graphic user-interface part of the Wellog Simulator is implemented with C and Motif. Through the user interface, the user can prepare the simulation data, select the tools, simulate the logs and plot the results.

  16. Weighted augmented Jacobian matrix with a variable coefficient method for kinematics mapping of space teleoperation based on human-robot motion similarity

    NASA Astrophysics Data System (ADS)

    Shi, Zhong; Huang, Xuexiang; Hu, Tianjian; Tan, Qian; Hou, Yuzhuo

    2016-10-01

    Space teleoperation is an important space technology, and human-robot motion similarity can improve the flexibility and intuition of space teleoperation. This paper aims to obtain an appropriate kinematics mapping method of coupled Cartesian-joint space for space teleoperation. First, the coupled Cartesian-joint similarity principles concerning kinematics differences are defined. Then, a novel weighted augmented Jacobian matrix with a variable coefficient (WAJM-VC) method for kinematics mapping is proposed. The Jacobian matrix is augmented to achieve a global similarity of human-robot motion. A clamping weighted least norm scheme is introduced to achieve local optimizations, and the operating ratio coefficient is variable to pursue similarity in the elbow joint. Similarity in Cartesian space and the property of joint constraint satisfaction are analysed to determine the damping factor and clamping velocity. Finally, a teleoperation system based on human motion capture is established, and the experimental results indicate that the proposed WAJM-VC method can improve the flexibility and intuition of space teleoperation to complete complex space tasks.
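
    A minimal numerical sketch of the weighted, damped resolved-rate mapping that underlies schemes of this kind is given below. The clamping logic, augmented rows and variable coefficient of the WAJM-VC method are not reproduced; the Jacobian, joint weights and damping factor are illustrative assumptions only.

        import numpy as np

        def weighted_damped_step(J, xdot, W, lam):
            """Joint rates from a task velocity xdot using a weighted, damped
            least-squares (pseudo-inverse-like) mapping:
                qdot = W^-1 J^T (J W^-1 J^T + lam^2 I)^-1 xdot
            Large weights in W penalize (slow down) the corresponding joints."""
            Winv = np.linalg.inv(W)
            A = J @ Winv @ J.T + (lam**2) * np.eye(J.shape[0])
            return Winv @ J.T @ np.linalg.solve(A, xdot)

        # Illustrative 6x7 Jacobian of a redundant arm and a Cartesian command.
        rng = np.random.default_rng(0)
        J = rng.standard_normal((6, 7))
        xdot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])

        W = np.diag([1, 1, 5, 1, 1, 1, 1])   # e.g. discourage motion of joint 3
        qdot = weighted_damped_step(J, xdot, W, lam=0.05)
        print("joint rates:", qdot)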

  17. Dual algebraic formulation of differential GPS

    NASA Astrophysics Data System (ADS)

    Lannes, A.; Dur, S.

    2003-05-01

    A new approach to differential GPS is presented. The corresponding theoretical framework calls on elementary concepts of algebraic graph theory. The notion of double difference, which is related to that of closure in the sense of Kirchhoff, is revisited in this context. The Moore-Penrose pseudo-inverse of the closure operator plays a key role in the corresponding dual formulation. This approach, which is very attractive from a conceptual point of view, sheds new light on the Teunissen formulation.

  18. A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.

    PubMed

    Cai, Binghuang; Jiang, Xia

    2014-04-01

    Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. Copyright © 2013 Elsevier Inc. All rights reserved.
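
    The core idea, determining the hidden-to-output weights in closed form with a matrix pseudo-inverse rather than by iterative training, can be sketched as follows. The hidden-layer size, activation and random input weights are illustrative assumptions, not the authors' exact architecture or data.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy binary-classification data (a stand-in for SNP or clinical features).
        X = rng.standard_normal((200, 10))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

        # Input-to-hidden layer: fixed random projection + sigmoid activation.
        n_hidden = 30
        W_in = rng.standard_normal((10, n_hidden))
        b_in = rng.standard_normal(n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W_in + b_in)))    # hidden-layer outputs

        # Hidden-to-output weights from the Moore-Penrose pseudo-inverse of H,
        # i.e. the least-squares solution of H W_out = y (no iterative learning).
        W_out = np.linalg.pinv(H) @ y

        pred = (H @ W_out > 0.5).astype(float)
        print("training accuracy:", float((pred == y).mean()))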

  19. Computing sparse derivatives and consecutive zeros problem

    NASA Astrophysics Data System (ADS)

    Chandra, B. V. Ravi; Hossain, Shahadat

    2013-02-01

    We describe a substitution-based sparse Jacobian matrix determination method using algorithmic differentiation. Utilizing the a priori known sparsity pattern, a compression scheme is determined using graph coloring. The "compressed pattern" of the Jacobian matrix is then reordered into a form suitable for computation by substitution. We show that the column reordering of the compressed pattern matrix (so as to align the zero entries into consecutive locations in each row) can be viewed as a variant of the traveling salesman problem. Preliminary computational results show that on the test problems the performance of nearest-neighbor-type heuristic algorithms is highly encouraging.
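
    A toy version of the nearest-neighbour reordering heuristic mentioned above is sketched here: columns of a (compressed) 0/1 sparsity pattern are ordered greedily so that similar columns, and hence runs of zeros within each row, tend to become consecutive. The distance measure and pattern are illustrative choices, not the authors' implementation.

        import numpy as np

        def nearest_neighbor_order(P):
            """Greedy nearest-neighbour ordering of the columns of a 0/1 pattern
            matrix P, using Hamming distance between columns as the 'city'
            distance of the underlying travelling-salesman view."""
            n = P.shape[1]
            remaining = set(range(n))
            order = [0]
            remaining.remove(0)
            while remaining:
                last = P[:, order[-1]]
                # Pick the unvisited column closest to the last one chosen.
                nxt = min(remaining, key=lambda j: int(np.sum(last != P[:, j])))
                order.append(nxt)
                remaining.remove(nxt)
            return order

        # Small compressed sparsity pattern (1 = structurally nonzero).
        P = np.array([[1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]])

        order = nearest_neighbor_order(P)
        print("column order:", order)
        print(P[:, order])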

  20. Generation of a pseudo-2D shear-wave velocity section by inversion of a series of 1D dispersion curves

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Liu, J.; Xu, Y.; Liu, Q.

    2008-01-01

    Multichannel Analysis of Surface Waves utilizes a multichannel recording system to estimate near-surface shear (S)-wave velocities from high-frequency Rayleigh waves. A pseudo-2D S-wave velocity (vS) section is constructed by aligning 1D models at the midpoint of each receiver spread and using a spatial interpolation scheme. The horizontal resolution of the section is therefore most influenced by the receiver spread length and the source interval. The receiver spread length sets the theoretical lower limit, and any vS structure with its lateral dimension smaller than this length will not be properly resolved in the final vS section. A source interval smaller than the spread length will not improve the horizontal resolution because spatial smearing has already been introduced by the receiver spread. In this paper, we first analyze the horizontal resolution of a pair of synthetic traces. Resolution analysis shows that (1) a pair of traces with a smaller receiver spacing achieves higher horizontal resolution of inverted S-wave velocities but results in a larger relative error; (2) the relative error of the phase velocity at a high frequency is smaller than at a low frequency; and (3) the relative error of the inverted S-wave velocity is affected by the signal-to-noise ratio of the data. These results provide us with a guideline to balance the trade-off between receiver spacing (horizontal resolution) and accuracy of the inverted S-wave velocity. We then present a scheme to generate a pseudo-2D S-wave velocity section with high horizontal resolution using multichannel records by inverting high-frequency surface-wave dispersion curves calculated through cross-correlation combined with a phase-shift scanning method. This method chooses only a pair of consecutive traces within a shot gather to calculate a dispersion curve. We finally invert surface-wave dispersion curves of synthetic and real-world data. Inversion results for both synthetic and real-world data demonstrate that inverting high-frequency surface-wave dispersion curves calculated from pairs of traces by cross-correlation with phase-shift scanning, using the damped least-squares method and the singular-value decomposition technique, can feasibly achieve a reliable pseudo-2D S-wave velocity section with relatively high horizontal resolution. © 2008 Elsevier B.V. All rights reserved.
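
    The damped least-squares/SVD machinery used for each dispersion-curve inversion can be sketched generically as below; the sensitivity matrix, residual and damping value are placeholders rather than the surface-wave kernels of the paper.

        import numpy as np

        def damped_lsq_step(G, r, damping):
            """One damped least-squares model update dm solving
            (G^T G + damping^2 I) dm = G^T r via the SVD of G."""
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            f = s / (s**2 + damping**2)      # damped inverse singular values
            return Vt.T @ (f * (U.T @ r))

        # Placeholder sensitivity matrix (phase velocity vs layer Vs) and residual.
        rng = np.random.default_rng(2)
        G = rng.standard_normal((15, 6))     # 15 frequencies, 6 layers
        r = rng.standard_normal(15)          # observed minus predicted phase velocities

        dVs = damped_lsq_step(G, r, damping=0.1)
        print("Vs update per layer:", dVs)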

  1. Pseudo Dynamic Testing and Seismic Rehabilitation of Iraqi Brick, Bearing and Shear Walls

    DTIC Science & Technology

    2008-04-01

    Research Laboratory. Approved for public release; distribution is unlimited. ERDC/CERL TR-08-6, April 2008. Pseudo Dynamic Testing and...Model 307-50 and one Satec 100 kip servo-hydraulic actuator controlled by closed-loop servo controllers and an Instron 8800 multi-axis controller and RS...Plus testing software. The Satec actuator was operated in displacement control mode, and the 50 kip CGS actuators were operated in modal control

  2. Semi-implicit iterative methods for low Mach number turbulent reacting flows: Operator splitting versus approximate factorization

    NASA Astrophysics Data System (ADS)

    MacArt, Jonathan F.; Mueller, Michael E.

    2016-12-01

    Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
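
    A minimal illustration of the Strang splitting idea for a scalar transport-reaction equation is given below; simple explicit sub-steps and a made-up linear reaction rate are used, whereas the paper's schemes are implicit and far more elaborate.

        import numpy as np

        def transport_step(u, dt, c, dx):
            # Upwind advection sub-step for u_t + c u_x = 0 on a periodic domain.
            return u - c * dt / dx * (u - np.roll(u, 1))

        def reaction_step(u, dt, k):
            # Exact solution of the linear reaction sub-problem u_t = -k u.
            return u * np.exp(-k * dt)

        def strang_step(u, dt, c, dx, k):
            # Strang splitting: half reaction, full transport, half reaction
            # (formally second-order accurate in dt).
            u = reaction_step(u, 0.5 * dt, k)
            u = transport_step(u, dt, c, dx)
            u = reaction_step(u, 0.5 * dt, k)
            return u

        # March a smooth profile for a few steps.
        nx, c, k = 100, 1.0, 2.0
        dx = 1.0 / nx
        dt = 0.5 * dx / c                     # CFL-limited explicit step
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-100 * (x - 0.5) ** 2)
        for _ in range(50):
            u = strang_step(u, dt, c, dx, k)
        print("max u after 50 steps:", u.max())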

  3. Optimal trace inequality constants for interior penalty discontinuous Galerkin discretisations of elliptic operators using arbitrary elements with non-constant Jacobians

    NASA Astrophysics Data System (ADS)

    Owens, A. R.; Kópházi, J.; Eaton, M. D.

    2017-12-01

    In this paper, a new method to numerically calculate the trace inequality constants, which arise in the calculation of penalty parameters for interior penalty discretisations of elliptic operators, is presented. These constants are provably optimal for the inequality of interest. As their calculation is based on the solution of a generalised eigenvalue problem involving the volumetric and face stiffness matrices, the method is applicable to any element type for which these matrices can be calculated, including standard finite elements and the non-uniform rational B-splines of isogeometric analysis. In particular, the presented method does not require the Jacobian of the element to be constant, and so can be applied to a much wider variety of element shapes than are currently available in the literature. Numerical results are presented for a variety of finite element and isogeometric cases. When the Jacobian is constant, it is demonstrated that the new method produces lower penalty parameters than existing methods in the literature in all cases, which translates directly into savings in the solution time of the resulting linear system. When the Jacobian is not constant, it is shown that the naive application of existing approaches can result in penalty parameters that do not guarantee coercivity of the bilinear form, and by extension, the stability of the solution. The method of manufactured solutions is applied to a model reaction-diffusion equation with a range of parameters, and it is found that using penalty parameters based on the new trace inequality constants results in better-conditioned linear systems, which can be solved approximately 11% faster than those produced by the methods from the literature.
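
    The computation at the heart of the method, the largest eigenvalue of a generalised eigenproblem built from face and volume matrices, can be sketched as follows. The small matrices below are random symmetric stand-ins, not actual DG stiffness matrices.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(3)
        n = 8

        # Stand-ins for the element matrices: S_face (positive semi-definite)
        # and S_vol (positive definite) on a single element's basis.
        A = rng.standard_normal((n, n))
        S_vol = A @ A.T + n * np.eye(n)       # volumetric "stiffness" matrix
        B = rng.standard_normal((3, n))
        S_face = B.T @ B                      # face "stiffness" matrix

        # Optimal trace-inequality constant = largest generalized eigenvalue of
        #   S_face v = lambda S_vol v.
        lam = eigh(S_face, S_vol, eigvals_only=True)
        C_trace = lam[-1]
        print("trace constant:", C_trace)

        # A penalty parameter is then typically taken proportional to C_trace.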

  4. Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier

    DTIC Science & Technology

    2010-01-01

    proposed by Pasion and Oldenburg [25]: Q(t) = k t^(−β) e^(−γt). (10) Various combinations of these fitting parameters can be used as inputs to classifier... Pasion-Oldenburg parameters k, β, and γ for each anomaly by a direct nonlinear least-squares fit of (10) and by linear (pseudo)inversion of its...combinations of the Pasion-Oldenburg parameters. Combining k and γ yields results similar to those of k and R, as Figure 7 and Table 2 show. Figure 8 and

  5. Causal inference in survival analysis using pseudo-observations.

    PubMed

    Andersen, Per K; Syriopoulou, Elisavet; Parner, Erik T

    2017-07-30

    Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs to address right-censoring, and often, special techniques are required for that purpose. We will show how censoring can be dealt with 'once and for all' by means of so-called pseudo-observations when doing causal inference in survival analysis. The pseudo-observations can be used as a replacement of the outcomes without censoring when applying 'standard' causal inference methods, such as (1) or (2) earlier. We study this idea for estimating the average causal effect of a binary treatment on the survival probability, the restricted mean lifetime, and the cumulative incidence in a competing risks situation. The methods will be illustrated in a small simulation study and via a study of patients with acute myeloid leukemia who received either myeloablative or non-myeloablative conditioning before allogeneic hematopoetic cell transplantation. We will estimate the average causal effect of the conditioning regime on outcomes such as the 3-year overall survival probability and the 3-year risk of chronic graft-versus-host disease. Copyright © 2017 John Wiley & Sons, Ltd.
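
    The pseudo-observation construction itself is simple to state: for a statistic theta-hat (here the Kaplan-Meier survival probability at a time point t0), the i-th pseudo-observation is n*theta_hat - (n-1)*theta_hat_(-i), where the second estimate leaves subject i out. A self-contained sketch, with a basic Kaplan-Meier estimator written out instead of calling a survival library, is shown below on simulated data.

        import numpy as np

        def km_surv(time, event, t0):
            """Kaplan-Meier estimate of S(t0) from times and event indicators."""
            order = np.argsort(time)
            time, event = time[order], event[order]
            s = 1.0
            n = len(time)
            for i in range(n):
                if time[i] > t0:
                    break
                if event[i] == 1:
                    at_risk = n - i
                    s *= 1.0 - 1.0 / at_risk
            return s

        def pseudo_observations(time, event, t0):
            """Jackknife pseudo-observations of S(t0) for each subject."""
            n = len(time)
            full = km_surv(time, event, t0)
            pseudo = np.empty(n)
            for i in range(n):
                mask = np.ones(n, dtype=bool)
                mask[i] = False
                loo = km_surv(time[mask], event[mask], t0)
                pseudo[i] = n * full - (n - 1) * loo
            return pseudo

        rng = np.random.default_rng(4)
        latent = rng.exponential(3.0, size=50)      # simulated event times
        cens = rng.exponential(5.0, size=50)        # simulated censoring times
        event = (latent <= cens).astype(int)
        obs = np.minimum(latent, cens)

        po = pseudo_observations(obs, event, t0=2.0)
        # The pseudo-observations can now replace the censored outcome in a
        # standard (e.g. GEE) regression for causal contrasts.
        print("mean pseudo-observation:", po.mean())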

  6. Diminished gray matter in the hippocampus of cannabis users: possible protective effects of cannabidiol.

    PubMed

    Demirakca, Traute; Sartorius, Alexander; Ende, Gabriele; Meyer, Nadja; Welzel, Helga; Skopp, Gisela; Mann, Karl; Hermann, Derik

    2011-04-01

    Chronic cannabis use has been associated with memory deficits and a volume reduction of the hippocampus, but none of the studies accounted for different effects of tetrahydrocannabinol (THC) and cannabidiol (CBD). Using a voxel-based morphometry approach optimized for small subcortical structures (DARTEL), gray matter (GM) concentration and volume of the hippocampus were measured in 11 chronic recreational cannabis users and 13 healthy controls, and correlated with THC and CBD from hair analyses. GM volume was calculated by modulating VBM using Jacobian determinants derived from the spatial normalization. Cannabis users showed lower GM volume located in a cluster of the right anterior hippocampus (P(uncorr)=0.002; effect size Cohen's d=1.34). In a regression analysis an inverse correlation of the ratio THC/CBD with the volume of the right hippocampus (P(uncorr)<0.001, Cohen's d=3.43) was observed. Furthermore, cannabidiol correlated positively with GM concentration (unmodulated VBM data), but not with GM volume (modulated VBM) in the bilateral hippocampus (P=0.03 after correction for hippocampal volume; left hippocampus Cohen's d=4.37 and right hippocampus 4.65). Lower volume in the right hippocampus in chronic cannabis users was corroborated. Higher THC and lower CBD were associated with this volume reduction, indicating neurotoxic effects of THC and neuroprotective effects of CBD. This confirms existing preclinical and clinical results. As a possible mechanism, the influence of cannabinoids on hippocampal neurogenesis is suggested. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  7. A comparison of viscous-plastic sea ice solvers with and without replacement pressure

    NASA Astrophysics Data System (ADS)

    Kimmritz, Madlen; Losch, Martin; Danilov, Sergey

    2017-07-01

    Recent developments of the explicit elastic-viscous-plastic (EVP) solvers call for a new comparison with implicit solvers for the equations of viscous-plastic sea ice dynamics. In Arctic sea ice simulations, the modified and the adaptive EVP solvers, and the implicit Jacobian-free Newton-Krylov (JFNK) solver are compared against each other. The adaptive EVP method shows convergence rates that are generally similar or even better than those of the modified EVP method, but the convergence of the EVP methods is found to depend dramatically on the use of the replacement pressure (RP). Apparently, using the RP can affect the pseudo-elastic waves in the EVP methods by introducing extra non-physical oscillations so that, in the extreme case, convergence to the VP solution can be lost altogether. The JFNK solver also suffers from higher failure rates with RP implying that with RP the momentum equations are stiffer and more difficult to solve. For practical purposes, both EVP methods can be used efficiently with an unexpectedly low number of sub-cycling steps without compromising the solutions. The differences between the RP solutions and the NoRP solutions (when the RP is not being used) can be reduced with lower thresholds of viscous regularization at the cost of increasing stiffness of the equations, and hence the computational costs of solving them.

  8. Assessing the effect of a partly unobserved, exogenous, binary time-dependent covariate on survival probabilities using generalised pseudo-values.

    PubMed

    Pötschger, Ulrike; Heinzl, Harald; Valsecchi, Maria Grazia; Mittlböck, Martina

    2018-01-19

    Investigating the impact of a time-dependent intervention on the probability of long-term survival is statistically challenging. A typical example is stem-cell transplantation performed after successful donor identification from registered donors. Here, a suggested simple analysis based on the exogenous donor availability status according to registered donors would allow the estimation and comparison of survival probabilities. As donor search is usually ceased after a patient's event, donor availability status is incompletely observed, so that this simple comparison is not possible and the waiting time to donor identification needs to be addressed in the analysis to avoid bias. It is methodologically unclear how to directly address cumulative long-term treatment effects without relying on proportional hazards while avoiding waiting time bias. The pseudo-value regression technique is able to handle the first two issues; a novel generalisation of this technique also avoids waiting time bias. Inverse-probability-of-censoring weighting is used to account for the partly unobserved exogenous covariate donor availability. Simulation studies demonstrate unbiasedness and satisfactory coverage probabilities of the new method. A real data example demonstrates that study results based on generalised pseudo-values have a clear medical interpretation that supports the clinical decision-making process. The proposed generalisation of the pseudo-value regression technique enables the comparison of survival probabilities between two independent groups where group membership becomes known over time and remains partly unknown. Hence, cumulative long-term treatment effects are directly addressed without relying on proportional hazards while avoiding waiting time bias.

  9. Kinematics of the six-degree-of-freedom force-reflecting Kraft Master

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1991-01-01

    Presented here are kinematic equations for a six degree of freedom force-reflecting hand controller. The forward kinematics solution is developed and shown in simplified form. The Jacobian matrix, which uses terms from the forward kinematics solution, is derived. Both of these kinematic solutions require joint angle inputs. A calibration method is presented to determine the hand controller joint angles given the respective potentiometer readings. The kinematic relationship describing the mechanical coupling between the hand and controller shoulder and elbow joints is given. These kinematic equations may be used in an algorithm to control the hand controller as a telerobotic system component. The purpose of the hand controller is two-fold: operator commands to the telerobotic system are entered using the hand controller, and contact forces and moments from the task are reflected to the operator via the hand controller.

  10. Flight Control Design for an Autonomous Rotorcraft Using Pseudo-Sliding Mode Control and Waypoint Navigation

    NASA Astrophysics Data System (ADS)

    Mallory, Nicolas Joseph

    The design of robust automated flight control systems for aircraft of varying size and complexity is a topic of continuing interest for both military and civilian industries. By merging the benefits of robustness from sliding mode control (SMC) with the familiarity and transparency of design tradeoff offered by frequency domain approaches, this thesis presents pseudo-sliding mode control as a viable option for designing automated flight control systems for complex six degree-of-freedom aircraft. The infinite frequency control switching of SMC is replaced, by necessity, with control inputs that are continuous in nature. An introduction to SMC theory is presented, followed by a detailed design of a pseudo-sliding mode control and automated flight control system for a six degree-of-freedom model of a Hughes OH6 helicopter. This model is then controlled through three different waypoint missions that demonstrate the stability of the system and the aircraft's ability to follow certain maneuvers despite time delays, large changes in model parameters and vehicle dynamics, actuator dynamics, sensor noise, and atmospheric disturbances.
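
    The essential modification relative to classical SMC, replacing the discontinuous sign of the sliding variable with a continuous saturation inside a boundary layer, can be written in a few lines; the plant, gains and sliding surface below are illustrative, not the OH6 design of the thesis.

        import numpy as np

        def sat(x):
            # Continuous saturation used in place of sign(x) in pseudo-SMC.
            return np.clip(x, -1.0, 1.0)

        def pseudo_smc(e, edot, lam=2.0, K=5.0, phi=0.1):
            """Pseudo-sliding-mode control for a tracking error e:
            sliding variable s = edot + lam*e, control u = -K * sat(s/phi)."""
            s = edot + lam * e
            return -K * sat(s / phi)

        # Drive a double integrator (xddot = u) toward x_ref = 0.
        dt, x, xdot = 0.01, 1.0, 0.0
        for _ in range(1000):
            u = pseudo_smc(e=x, edot=xdot)
            xdot += u * dt
            x += xdot * dt
        print("final state:", x, xdot)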

  11. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  12. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE PAGES

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; ...

    2016-07-13

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  13. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar

    2016-07-01

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  14. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    As aircraft performance improves in reliability, maneuverability and survivability, the number of control effectors increases considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be measured directly by the area of its subset of attainable moments. Because of the nonlinear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, and its area is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical algorithm is proposed to compute the area of the non-convex polygon. To improve the allocation efficiency, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
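
    A toy daisy-chain allocator is sketched below: the primary surface group is commanded via the pseudo-inverse of its effectiveness matrix, the result is clipped to the surface limits, and the unmet moment is passed to the secondary group. The effectiveness matrices and limits are invented for illustration and do not come from the paper.

        import numpy as np

        def daisy_chain(B1, B2, lim1, lim2, m_cmd):
            """Allocate a commanded moment m_cmd over two effector groups."""
            # First group: pseudo-inverse allocation, then saturate.
            u1 = np.clip(np.linalg.pinv(B1) @ m_cmd, -lim1, lim1)
            # Pass the residual moment to the second group.
            residual = m_cmd - B1 @ u1
            u2 = np.clip(np.linalg.pinv(B2) @ residual, -lim2, lim2)
            return u1, u2, m_cmd - B1 @ u1 - B2 @ u2

        # Lateral-directional example: 2 moments (roll, yaw), 2 + 2 surfaces.
        B1 = np.array([[1.0, 0.8],
                       [0.1, -0.2]])          # ailerons (mostly roll)
        B2 = np.array([[0.05, 0.0],
                       [0.9, 0.7]])           # rudders (mostly yaw)
        lim1 = np.array([0.3, 0.3])
        lim2 = np.array([0.5, 0.5])

        u1, u2, unmet = daisy_chain(B1, B2, lim1, lim2, m_cmd=np.array([0.6, 0.4]))
        print("group 1:", u1, " group 2:", u2, " unmet moment:", unmet)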

  15. Finite Nilpotent BRST Transformations in Hamiltonian Formulation

    NASA Astrophysics Data System (ADS)

    Rai, Sumit Kumar; Mandal, Bhabani Prasad

    2013-10-01

    We consider finite field-dependent BRST (FFBRST) transformations in the context of the Hamiltonian formulation using the Batalin-Fradkin-Vilkovisky method. The non-trivial Jacobian of such transformations is calculated in extended phase space. The contribution from the Jacobian can be written as the exponential of a local functional of the fields, which can be added to the effective Hamiltonian of the system. Thus, FFBRST in the Hamiltonian formulation with extended phase space also connects different effective theories. We establish this result with the help of two explicit examples. We also show that the FFBRST transformations are similar to canonical transformations in the sector of the Lagrange multiplier and its corresponding momenta.

  16. Recovery Discontinuous Galerkin Jacobian-Free Newton-Krylov Method for All-Speed Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HyeongKae Park; Robert Nourgaliev; Vincent Mousseau

    2008-07-01

    A novel numerical algorithm (rDG-JFNK) for all-speed fluid flows with heat conduction and viscosity is introduced. The rDG-JFNK combines the Discontinuous Galerkin spatial discretization with the implicit Runge-Kutta time integration under the Jacobian-free Newton-Krylov framework. We solve fully-compressible Navier-Stokes equations without operator-splitting of hyperbolic, diffusion and reaction terms, which enables fully-coupled high-order temporal discretization. The stability constraint is removed due to the L-stable Explicit, Singly Diagonal Implicit Runge-Kutta (ESDIRK) scheme. The governing equations are solved in the conservative form, which allows one to accurately compute shock dynamics, as well as low-speed flows. For spatial discretization, we develop a “recovery” family of DG, exhibiting nearly-spectral accuracy. To precondition the Krylov-based linear solver (GMRES), we developed an “Operator-Split” (OS) Physics Based Preconditioner (PBP), in which we transform/simplify the fully-coupled system into a sequence of segregated scalar problems, each of which can be solved efficiently with a multigrid method. Each scalar problem is designed to target/cluster eigenvalues of the Jacobian matrix associated with a specific physics.
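
    The "Jacobian-free" ingredient is the finite-difference approximation of a Jacobian-vector product, which is all a Krylov solver such as GMRES needs; a minimal sketch follows. The residual function is a placeholder for the discretized governing equations, and no preconditioner is included.

        import numpy as np

        def residual(u):
            # Placeholder nonlinear residual F(u) = 0 (stands in for the
            # discretized Navier-Stokes residual).
            return np.array([u[0]**2 + u[1] - 3.0,
                             u[0] + u[1]**2 - 5.0])

        def jfnk_matvec(F, u, v, eps=1e-7):
            """Matrix-free Jacobian-vector product J(u) v ~ (F(u + h v) - F(u)) / h."""
            h = eps * max(1.0, np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
            return (F(u + h * v) - F(u)) / h

        u = np.array([1.0, 1.0])
        v = np.array([0.5, -0.2])
        print("J(u) v ~", jfnk_matvec(residual, u, v))

        # Inside a JFNK solver this matvec is handed to GMRES at every Newton
        # iteration instead of forming and storing the Jacobian explicitly.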

  17. Acceleration methods for multi-physics compressible flow

    NASA Astrophysics Data System (ADS)

    Peles, Oren; Turkel, Eli

    2018-04-01

    In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems, including turbulent, reactive and two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low Mach number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method for real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/Implicit smoother requires an approximation of the source term Jacobian. The properties of the Jacobian are very important for the stability of the method. We discuss what the chemical physics theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix. We focus on the implication of Le Chatelier's principle for the sign of the diagonal entries of the Jacobian. We present the implementation of the method for turbulent flow. We use two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is for two-phase flows with gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation rocket motor. The numerical examples in this work include transonic flow about an RAE2822 airfoil and an M6 Onera wing, flow about a NACA0012 airfoil at very low Mach number, two-phase flow inside a ballistic evaluation motor (BEM), a turbulent reactive shear layer, and a time-dependent Sod shock-tube problem.

  18. Principal polynomial analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus

    2014-11-01

    This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. In contrast to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it makes it possible to understand the identified features in the input domain, where the data has physical meaning. Moreover, it allows evaluating the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, in both synthetic and real datasets from the UCI repository.

  19. Polarization-dependent optics using gauge-field metamaterials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Fu; Xiao, Shiyi; Li, Jensen, E-mail: j.li@bham.ac.uk

    2015-12-14

    We show that an effective gauge field for photons with polarization-split dispersion surfaces, realized using uniaxial metamaterials, can be used for polarization control with unique opportunities. The metamaterials with the proposed gauge field correspond to a special choice of eigenpolarizations on the Poincaré sphere as pseudo-spins, in contrast to those from either conventional birefringent crystals or optically active media. It gives rise to all-angle polarization control and a generic route to manipulate photon trajectories or polarizations in the pseudo-spin domain. As demonstrations, we show beam splitting (a birefringent polarizer), all-angle polarization control, a unidirectional polarization filter, and an interferometer as various polarization control devices in the pseudo-spin domain. We expect that more polarization-dependent devices can be designed under the same framework.

  20. Left fusiform BOLD responses are inversely related to word-likeness in a one-back task.

    PubMed

    Wang, Xiaojuan; Yang, Jianfeng; Shu, Hua; Zevin, Jason D

    2011-04-01

    Although its precise functional contribution to reading remains unclear, there is broad consensus that activity in the left mid-fusiform gyrus is highly sensitive to written words and word-like stimuli. In the current study, we take advantage of a particularity of the Chinese writing system in order to manipulate word-likeness parametrically, from real characters, to pseudo-characters that vary in whether they contain phonological and semantic cues, to artificial stimuli with varying surface similarity to real characters. In a one-back task, BOLD activity in the left mid-fusiform was inversely related to word-likeness, such that the least activity was observed in response to real characters, and the greatest to artificial stimuli that violate the orthotactic constraints of the writing system. One possible explanation for this surprising result is that the short-term memory demands of the one-back task put more pressure on the visual system when other sources of information cannot be used to aid in detecting repeated stimuli. For real characters and, to a lesser extent, for pseudo-characters, information about meaning and pronunciation can contribute to performance, whereas artificial stimuli are entirely dependent on visual information. Consistent with this view, functional connectivity analyses revealed a strong positive relationship between the left mid-fusiform and other visual areas, whereas areas typically involved in phonological and semantic processing for text were negatively correlated with this region. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; Stanier, Adam John

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.

  2. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  3. The hip strength:ankle proprioceptive threshold ratio predicts falls and injury in diabetic neuropathy

    PubMed Central

    Richardson, James K.; DeMott, Trina; Allet, Lara; Kim; Ashton-Miller, James A.

    2014-01-01

    Introduction We determined lower limb neuromuscular capacities associated with falls and fall-related injuries in older people with declining peripheral nerve function. Methods Thirty-two subjects (67.4 ± 13.4 years; 19 with type 2 diabetes), representing a spectrum of peripheral neurologic function, were evaluated with frontal plane proprioceptive thresholds at the ankle, frontal plane motor function at the ankle and hip, and prospective follow-up for 1 year. Results Falls and fall-related injuries were reported by 20 (62.5%) and 14 (43.8%) subjects, respectively. The ratio of hip adductor rate of torque development to ankle proprioceptive threshold (HipSTR/AnkPRO) predicted falls (pseudo-R2 = .726) and injury (pseudo-R2 = .382). No other variable maintained significance in the presence of HipSTR/AnkPRO. Discussion Fall and injury risk in the population studied is related inversely to HipSTR/AnkPRO. Increasing rapidly available hip strength in patients with neuropathic ankle sensory impairment may decrease risk of falls and related injuries. PMID:24282041

  4. Enantiopure pseudo-C3-symmetric titanium alkoxide with propeller-like chirality.

    PubMed

    Axe, Philip; Bull, Steven D; Davidson, Matthew G; Gilfillan, Carly J; Jones, Matthew D; Robinson, Diane E J E; Turner, Luke E; Mitchell, William L

    2007-01-18

    An enantiopure amine tris(phenolate) ligand containing a single stereogenic center has been used to control the propeller-like chirality of a derived pseudo-C3-symmetric titanium isopropoxide complex with excellent levels of diastereocontrol. [structure: see text].

  5. Proximalisation of the tibial tubercle gives a good outcome in patients undergoing revision total knee arthroplasty who have pseudo patella baja.

    PubMed

    Vandeputte, F-J; Vandenneucker, H

    2017-07-01

    The aim of this study was to compare the outcome of revision total knee arthroplasty (TKA) with and without proximalisation of the tibial tubercle in patients with a failed primary TKA who have pseudo patella baja. All revision TKAs, performed between January 2008 and November 2013 at a tertiary referral University Orthopaedic Department were retrospectively reviewed. Pseudo patella baja was defined using the modified Insall-Salvati and the Blackburne-Peel ratios. A proximalisation of the tibial tubercle was performed in 13 patients with pseudo patella baja who were matched with a control group of 13 patients for gender, age, height, weight, body mass index, length of surgery and Blackburne-Peel ratio. Outcome was assessed two years post-operatively using the Knee Society Score (KSS). The increase in KSS was significantly higher in the osteotomy group compared with the control group. The outcome was statistically better in patients in whom proximalisation of > 1 cm had been achieved compared with those in whom the proximalisation was < 1 cm. In this retrospective case-control study, a proximal transfer of the tibial tubercle at revision TKA in patients with pseudo patella baja gives good outcomes without major complications. Cite this article: Bone Joint J 2017;99-B:912-16. ©2017 The British Editorial Society of Bone & Joint Surgery.

  6. Sensitivity analysis of dynamic biological systems with time-delays.

    PubMed

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2010-10-15

    Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations either by the analytic method or by symbolic manipulation is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. In a theoretical comparison with direct-coupled methods, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex biological systems with time-delays.
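
    The Jacobian evaluation that dominates the cost of the sensitivity equations can be automated in several ways. As a stand-in for the automatic differentiation used by the authors, the sketch below uses complex-step differentiation (exact to roughly machine precision for analytic right-hand sides); the two-state delayed model is an invented example, not one of the paper's systems.

        import numpy as np

        def rhs(x, x_delayed, k=(1.0, 0.5, 2.0)):
            # Toy delayed-feedback model: dx/dt = f(x(t), x(t - tau)).
            k1, k2, k3 = k
            return np.array([k1 * x_delayed[1] - k2 * x[0],
                             k3 * x[0] / (1.0 + x[0]) - x[1]])

        def complex_step_jacobian(f, x, h=1e-20):
            """df/dx by the complex-step method: Im(f(x + i h e_j)) / h."""
            n = x.size
            J = np.zeros((n, n))
            for j in range(n):
                xp = x.astype(complex)
                xp[j] += 1j * h
                J[:, j] = np.imag(f(xp)) / h
            return J

        x = np.array([0.8, 0.3])
        x_tau = np.array([0.5, 0.7])
        # Jacobian with respect to the current state (delayed state held fixed).
        J = complex_step_jacobian(lambda z: rhs(z, x_tau), x)
        print(J)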

  7. Pseudo Landau levels and quantum oscillations in strained Weyl semimetals

    NASA Astrophysics Data System (ADS)

    Alisultanov, Z. Z.

    2018-05-01

    Crystal lattice deformation in Weyl materials, where the two chiralities are separated in momentum space, leads to the appearance of gauge pseudo-fields. We investigate the quantum oscillations induced by a pseudo-magnetic field in a strained Weyl semimetal (WSM). In contrast to previous work on this problem, we use a more general tilted Hamiltonian, which appears more suitable for strained WSMs. We show that the pseudo-magnetic-field-induced magnetization of a strained WSM is nonzero because an electric field (the gradient of the deformation potential) is induced simultaneously with the pseudo-magnetic field. This is related to the fact that the pseudo Landau levels (LLs) in a strained WSM differ in the vicinities of different Weyl points (WPs) owing to the tilt of the spectrum. This violation of the equivalence between WPs leads to a modulation of the quantum oscillations. We also show that the magnitude of the magnetization can be changed by applying an external electric field; in particular, it can be reduced to zero. The possibility of controlling the magnetization with an electric field is interesting both from a fundamental point of view (a new type of magneto-electric effect) and from an application point of view (an additional means of controlling the diamagnetism of deformed WSMs). Finally, a coexistence of type-I and type-II Weyl fermions is possible in the system under investigation; such a phase is entirely new to the physics of topological systems.

  8. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse-reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
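
    Orthogonal matching pursuit itself is compact; a generic sketch is given below. The random sensing matrix and sparse signal are placeholders for the projection matrix and absorption-coefficient expansion of the actual solver.

        import numpy as np

        def omp(A, y, n_nonzero):
            """Orthogonal matching pursuit: greedily select columns of A and
            re-fit the selected coefficients by least squares at every step."""
            m, n = A.shape
            support, r = [], y.copy()
            x = np.zeros(n)
            for _ in range(n_nonzero):
                j = int(np.argmax(np.abs(A.T @ r)))   # most correlated column
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                x = np.zeros(n)
                x[support] = coef
                r = y - A @ x                          # update the residual
            return x

        rng = np.random.default_rng(5)
        A = rng.standard_normal((40, 100))             # underdetermined system
        x_true = np.zeros(100)
        x_true[[3, 57, 90]] = [1.5, -2.0, 0.7]          # sparse coefficients
        y = A @ x_true

        x_hat = omp(A, y, n_nonzero=3)
        print("recovered support:", np.nonzero(x_hat)[0])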

  9. Robustness study of the pseudo open-loop controller for multiconjugate adaptive optics.

    PubMed

    Piatrou, Piotr; Gilles, Luc

    2005-02-20

    Robustness of the recently proposed "pseudo open-loop control" algorithm against various system errors has been investigated for the representative example of the Gemini-South 8-m telescope multiconjugate adaptive-optics system. The existing model to represent the adaptive-optics system with pseudo open-loop control has been modified to account for misalignments, noise and calibration errors in deformable mirrors, and wave-front sensors. Comparison with the conventional least-squares control model has been done. We show with the aid of both transfer-function pole-placement analysis and Monte Carlo simulations that POLC remains remarkably stable and robust against very large levels of system errors and outperforms in this respect least-squares control. Approximate stability margins as well as performance metrics such as Strehl ratios and rms wave-front residuals averaged over a 1-arc min field of view have been computed for different types and levels of system errors to quantify the expected performance degradation.

  10. Multi-Agent Flight Simulation with Robust Situation Generation

    NASA Technical Reports Server (NTRS)

    Johnson, Eric N.; Hansman, R. John, Jr.

    1994-01-01

    A robust situation generation architecture has been developed that generates multi-agent situations for human subjects. An implementation of this architecture was developed to support flight simulation tests of air transport cockpit systems. This system maneuvers pseudo-aircraft relative to the human subject's aircraft, generating specific situations for the subject to respond to. These pseudo-aircraft maneuver within reasonable performance constraints, interact in a realistic manner, and make pre-recorded voice radio communications. Use of this system minimizes the need for human experimenters to control the pseudo-agents and provides consistent interactions between the subject and the pseudo-agents. The achieved robustness of this system to typical variations in the subject's flight path was explored. It was found to successfully generate specific situations within the performance limitations of the subject-aircraft, pseudo-aircraft, and the script used.

  11. Formal integration of controlled-source and passive seismic data: Utilization of the CD-ROM experiment

    NASA Astrophysics Data System (ADS)

    Rumpfhuber, E.; Keller, G. R.; Velasco, A. A.

    2005-12-01

    Many large-scale experiments conduct both controlled-source and passive deployments to investigate the lithospheric structure of a targeted region. Many of these studies utilize each data set independently, resulting in different images of the Earth depending on the data set investigated. In general, formal integration of these data sets, such as joint inversions, with other data has not been performed. The CD-ROM experiment, which included both 2-D controlled-source and passive recording along a profile extending from southern Wyoming to northern New Mexico, serves as an excellent data set to develop a formal integration strategy between controlled-source and passive experiments. These data are ideal to develop this strategy because: 1) the analysis of refraction/wide-angle reflection data yields Vp structure, and sometimes Vs structure, of the crust and uppermost mantle; 2) analysis of the PmP phase (Moho reflection) yields estimates of the average Vp of the crust; and 3) receiver functions contain full-crustal reverberations and yield the Vp/Vs ratio, but do not constrain the absolute P and S velocity. Thus, a simple form of integration involves using the Vp/Vs ratio from receiver functions and the average Vp from refraction measurements to solve for the average Vs of the crust. When refraction/wide-angle reflection data and several nearby receiver functions are available, an integrated 2-D model can be derived. In receiver functions, the PS conversion gives the S-wave travel-time (ts) through the crust along the raypath traveled from the Moho to the surface. Since the receiver function crustal reverberation gives the Vp/Vs ratio, it is also possible to use the arrival time of the converted phase, PS, to solve for the travel time of the direct teleseismic P-wave through the crust along the ray path. Raytracing can yield the point where the teleseismic wave intersects the Moho. In this approach, the conversion point is essentially a pseudo-shotpoint; thus, the converted arrival at the surface can be jointly modeled with refraction data using a 3-D inversion code. Employing the combined CD-ROM data sets, we will be investigating the joint inversion results of controlled-source data and receiver functions.
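
    The "simple form of integration" mentioned above amounts to one line of algebra: with the average crustal P velocity from refraction data and the Vp/Vs ratio from receiver functions,

        \bar{V}_S \;=\; \frac{\bar{V}_P}{\,V_P/V_S\,}.

    For example (with illustrative, typical crustal values rather than values from the abstract), an average Vp of 6.4 km/s and Vp/Vs of 1.78 give an average Vs of about 3.6 km/s.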

  12. Vector critical points and generalized quasi-efficient solutions in nonsmooth multi-objective programming.

    PubMed

    Wang, Zhen; Li, Ru; Yu, Guolin

    2017-01-01

    In this work, several extended approximately invex vector-valued functions of higher order involving a generalized Jacobian are introduced, and some examples are presented to illustrate their existence. The notions of higher-order (weak) quasi-efficiency with respect to a function are proposed for a multi-objective programming problem. Under the introduced generalized higher-order approximate invexity assumptions, we prove that the solutions of generalized vector variational-like inequalities in terms of the generalized Jacobian are the generalized quasi-efficient solutions of nonsmooth multi-objective programming problems. Moreover, the equivalent conditions are presented, namely, a vector critical point is a weakly quasi-efficient solution of higher order with respect to a function.

  13. Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T

    2016-12-20

    Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides results closer to MCMC if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  14. Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics

    NASA Astrophysics Data System (ADS)

    Kordy, Michal Adam

    The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency-domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequencies. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix at low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using a discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data-space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions, and we apply it to real data collected in the Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspaces. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. We prove a theorem of almost always lucky failure in the case of the right-hand side being analytically dependent on frequency. The operator's null space is treated by decomposing the solution into the part in the null space and the part orthogonal to it.

  15. An inverse method to estimate emission rates based on nonlinear least-squares-based ensemble four-dimensional variational data assimilation with local air concentration measurements.

    PubMed

    Geng, Xiaobing; Xie, Zhenghui; Zhang, Lijun; Xu, Mei; Jia, Binghao

    2018-03-01

    An inverse source estimation method is proposed to reconstruct emission rates using local air concentration sampling data. It involves the nonlinear least squares-based ensemble four-dimensional variational data assimilation (NLS-4DVar) algorithm and a transfer coefficient matrix (TCM) created using FLEXPART, a Lagrangian atmospheric dispersion model. The method was tested by twin experiments and experiments with actual Cs-137 concentrations measured around the Fukushima Daiichi Nuclear Power Plant (FDNPP). Emission rates can be reconstructed sequentially with the progression of a nuclear accident, which is important in the response to a nuclear emergency. With pseudo observations generated continuously, most of the emission rates were estimated accurately, except under conditions when the wind blew off land toward the sea and at extremely slow wind speeds near the FDNPP. Because of the long duration of accidents and variability in meteorological fields, monitoring networks composed of land stations only in a local area are unable to provide enough information to support an emergency response. The errors in the estimation compared to the real observations from the FDNPP nuclear accident stemmed from a shortage of observations, lack of data control, and an inadequate atmospheric dispersion model without improvement and appropriate meteorological data. The proposed method should be developed further to meet the requirements of a nuclear emergency response. Copyright © 2017 Elsevier Ltd. All rights reserved.
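
    The core of such TCM-based source estimation can be illustrated with a heavily simplified least-squares sketch: a transfer coefficient matrix maps time-varying emission rates to concentrations at the monitoring stations, and the emission vector is recovered from (pseudo) observations. This is only a schematic stand-in for the NLS-4DVar algorithm described above; the matrix, regularization weight, and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transfer coefficient matrix: n_obs station-time samples responding
# to n_src emission-rate intervals (in practice built from FLEXPART runs).
n_obs, n_src = 120, 24
tcm = rng.random((n_obs, n_src)) * 1e-6         # (Bq m^-3) per (Bq s^-1)

q_true = rng.uniform(1e10, 1e12, n_src)         # "true" emission rates (Bq/s)
y = tcm @ q_true + rng.normal(0, 1e5, n_obs)    # pseudo observations with noise

# Tikhonov-regularized least squares as a simple proxy for the variational update.
lam = 1e-14
q_est = np.linalg.solve(tcm.T @ tcm + lam * np.eye(n_src), tcm.T @ y)

print("relative error per interval:", np.abs(q_est - q_true) / q_true)
```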

  16. A novel pseudo resistor structure for biomedical front-end amplifiers.

    PubMed

    Yu-Chieh Huang; Tzu-Sen Yang; Shun-Hsi Hsu; Xin-Zhuang Chen; Jin-Chern Chiou

    2015-08-01

    This study proposes a novel pseudo resistor structure with a tunable DC bias voltage for biomedical front-end amplifiers (FEAs). In the proposed FEA, a high-pass filter composed of a differential difference amplifier and a pseudo resistor is implemented. The FEA is manufactured using a standard TSMC 0.35 μm CMOS process. In this study, three types of FEAs, each with a different pseudo resistor, are simulated, fabricated, and measured for comparison and for electrocorticography (ECoG) measurement, and all the results show that the proposed pseudo resistor is superior to the other two types in bandwidth. In the chip implementation, the lower and upper cutoff frequencies of the high-pass filter with the proposed pseudo resistor are 0.15 Hz and 4.98 kHz, respectively. It also demonstrates lower total harmonic distortion of -58 dB at 1 kHz and higher stability over a wide supply range (1.8 V and 3.3 V) and control voltage range (0.9 V and 1.65 V) than the others. Moreover, the FEA with the proposed pseudo resistor successfully recorded spike-and-wave discharges of the ECoG signal in an in vivo experiment on a rat with pentylenetetrazol-induced seizures.

  17. New scene change control scheme based on pseudoskipped picture

    NASA Astrophysics Data System (ADS)

    Lee, Youngsun; Lee, Jinwhan; Chang, Hyunsik; Nam, Jae Y.

    1997-01-01

    A new scene change control scheme which improves the video coding performance for sequences that contain many scene-changed pictures is proposed in this paper. Scene-changed pictures, except intra-coded pictures, usually need more bits than normal pictures in order to maintain constant picture quality. The main idea of this paper is how to obtain the extra bits needed to encode scene-changed pictures. We encode a B picture which is located before a scene-changed picture like a skipped picture; we call such a B picture a pseudo-skipped picture. By generating the pseudo-skipped picture, we can save some bits, which are added to the originally allocated target bits to encode the scene-changed picture. The simulation results show that the proposed algorithm improves encoding performance by about 0.5 to 2.0 dB of PSNR compared with the MPEG-2 TM5 rate control scheme. In addition, the suggested algorithm is compatible with the MPEG-2 video syntax, and the picture repetition is not noticeable.

  18. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

    Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
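
    A stripped-down illustration of two of the ingredients discussed above: propagating a (diagonal, hypothetical) weight covariance to output error bars, and probing the input-output Jacobian by finite differences, for a tiny feed-forward network. This is not the authors' Bayesian machinery, only a sketch of the underlying linear error-propagation idea under assumed weight uncertainties.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 1-hidden-layer network y = W2 tanh(W1 x): a stand-in for a retrieval NN.
W1, W2 = rng.normal(size=(8, 3)), rng.normal(size=(2, 8))

def forward(x, w1=None, w2=None):
    w1 = W1 if w1 is None else w1
    w2 = W2 if w2 is None else w2
    return w2 @ np.tanh(w1 @ x)

x0 = np.array([0.3, -1.2, 0.7])
eps = 1e-6

# Input-output Jacobian by central finite differences (sensitivity of outputs to inputs).
J = np.column_stack([
    (forward(x0 + eps * e) - forward(x0 - eps * e)) / (2 * eps)
    for e in np.eye(3)
])

# Linear propagation of an assumed diagonal weight covariance to the outputs:
# perturb each weight, record the output sensitivity, and accumulate Jw Sw Jw^T.
sigma_w = 0.05                        # hypothetical posterior std of every weight
sens = []
for mat, name in ((W1, "W1"), (W2, "W2")):
    for idx in np.ndindex(mat.shape):
        dm = np.zeros_like(mat)
        dm[idx] = eps
        if name == "W1":
            dy = (forward(x0, w1=W1 + dm) - forward(x0, w1=W1 - dm)) / (2 * eps)
        else:
            dy = (forward(x0, w2=W2 + dm) - forward(x0, w2=W2 - dm)) / (2 * eps)
        sens.append(dy)
Jw = np.column_stack(sens)
S_y = (sigma_w ** 2) * Jw @ Jw.T      # output error covariance (error bars + correlations)

print("input Jacobian:\n", J)
print("output std devs:", np.sqrt(np.diag(S_y)))
```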

  19. Deformable and conformal silk hydrogel inverse opal

    PubMed Central

    Kim, Sookyoung; Kim, Sunghwan

    2017-01-01

    Photonic crystals (PhCs) efficiently manipulate photons at the nanoscale. Applying these crystals to biological tissue that has been subjected to large deformation and humid environments can lead to fascinating bioapplications such as in vivo biosensors and artificial ocular prostheses. These applications require that these PhCs have mechanical durability, deformability, and biocompatibility. Herein, we introduce a deformable and conformal silk hydrogel inverse opal (SHIO); the photonic lattice of this 3D PhC can be deformed by mechanical strain. This SHIO is prepared by the UV cross-linking of a liquid stilbene/silk solution, to give a transparent and elastic hydrogel. The pseudophotonic band gap (pseudo-PBG) of this material can be stably tuned by deformation of the photonic lattice (stretching, bending, and compressing). Proof-of-concept experiments demonstrate that the SHIO can be applied as an ocular prosthesis for better vision, such as that provided by the tapeta lucida of nocturnal or deep-sea animals. PMID:28559327

  20. Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM

    NASA Astrophysics Data System (ADS)

    Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang

    2016-03-01

    A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for the fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information in the spectrum diagrams. The SSIM index is used to evaluate the high-frequency information of the spectrum diagrams in order to assign the weights in the fusion process adaptively. After the new spectrum diagram is obtained according to the fusion rule, the final fused image is obtained by the inverse 2D-PWVD and the inverse GIHS transform. Experimental results show that the proposed method can obtain high-quality fusion images.

  1. Division by zero, pseudo-division by zero, Zhang dynamics method and Zhang-gradient method about control singularity conquering

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Zhang, Yinyan; Chen, Dechao; Xiao, Zhengli; Yan, Xiaogang

    2017-01-01

    In this paper, the division-by-zero (DBO) problem in the field of nonlinear control, which is traditionally termed the control singularity problem (or specifically, controller singularity problem), is investigated by the Zhang dynamics (ZD) method and the Zhang-gradient (ZG) method. According to the impact of the DBO problem on the state variables of the controlled nonlinear system, the concepts of the pseudo-DBO problem and the true-DBO problem are proposed in this paper, which provide a new perspective for the researchers on the DBO problems as well as nonlinear control systems. Besides, the two classes of DBO problems are solved under the framework of the ZG method. Specific examples are shown and investigated in this paper to illustrate the two proposed concepts and the efficacy of the ZG method in conquering pseudo-DBO and true-DBO problems. The application of the ZG method to the tracking control of a two-wheeled mobile robot further substantiates the effectiveness of the ZG method. In addition, the ZG method is successfully applied to the tracking control of a pure-feedback nonlinear system.

  2. Electrical impedance tomography in anisotropic media with known eigenvectors

    NASA Astrophysics Data System (ADS)

    Abascal, Juan-Felipe P. J.; Lionheart, William R. B.; Arridge, Simon R.; Schweiger, Martin; Atkinson, David; Holder, David S.

    2011-06-01

    Electrical impedance tomography is an imaging method with which volumetric images of conductivity are produced by injecting electrical current and measuring boundary voltages. It has the potential to become a portable non-invasive medical imaging technique. Until now, most implementations have neglected anisotropy even though human tissues like bone, muscle and brain white matter are markedly anisotropic. The recovery of an anisotropic conductivity tensor is uniquely determined by boundary measurements only up to a diffeomorphism that fixes the boundary. Nevertheless, uniqueness can be restored by providing information about the diffeomorphism. There are uniqueness results for two constraints: one eigenvalue and a scalar multiple of a general tensor. A usable constraint for medical applications is when the eigenvectors of the underlying tissue are known, which can be approximated from MRI or estimated from DT-MRI, although the eigenvalues are unknown. However, there is no known theoretical result guaranteeing uniqueness for this constraint. In fact, only a few previous inversion studies have attempted to recover one or more eigenvalues assuming certain symmetries while ignoring nonuniqueness. In this work, the aim was to undertake a numerical study of the feasibility of the recovery of a piecewise linear finite element conductivity tensor in anisotropic media with known eigenvectors from complete boundary data. The work suggests that uniqueness holds for this constraint, in addition to proposing a methodology for the incorporation of this prior for general conductivity tensors. This was carried out by performing an analysis of the Jacobian rank and by reconstructing four conductivity distributions: two diagonal tensors whose eigenvalues were linear and sinusoidal functions, and two general tensors whose eigenvectors resembled physiological tissue, one with eigenvectors spherically orientated like a spherical layered structure, and a sample of DT-MRI data of brain white matter. The Jacobian with respect to the three eigenvalues was full-rank, and it was possible to recover three eigenvalues for the four simulated distributions. This encourages further theoretical study of the uniqueness for this constraint and supports the use of this as a relevant, usable method for medical applications.
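
    The Jacobian rank analysis mentioned above can be mimicked numerically: the effective rank of a sensitivity matrix is read off from its singular-value spectrum. The matrix below is random and purely illustrative; in the EIT setting its columns would be the sensitivities of the boundary voltages to the three eigenvalue fields.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Jacobian: 200 boundary measurements vs. 60 eigenvalue parameters.
J = rng.normal(size=(200, 60))

s = np.linalg.svd(J, compute_uv=False)
tol = max(J.shape) * np.finfo(float).eps * s[0]   # rank tolerance, as used by numpy
rank = int(np.sum(s > tol))

print(f"numerical rank = {rank} of {J.shape[1]} parameters")
print("condition number =", s[0] / s[-1])
```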

  3. A Numerical Testbed for Remote Sensing of Aerosols, and its Demonstration for Evaluating Retrieval Synergy from a Geostationary Satellite Constellation of GEO-CAPE and GOES-R

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Ding, Shouguo; Zeng, Jing; Spurr, Robert; Liu, Xiong; Chance, Kelly; Mishchenko, Michael I.

    2014-01-01

    We present a numerical testbed for remote sensing of aerosols, together with a demonstration for evaluating retrieval synergy from a geostationary satellite constellation. The testbed combines inverse (optimal-estimation) software with a forward model containing linearized code for computing particle scattering (for both spherical and non-spherical particles), a kernel-based (land and ocean) surface bi-directional reflectance facility, and a linearized radiative transfer model for polarized radiance. Calculation of gas absorption spectra uses the HITRAN (HIgh-resolution TRANsmission molecular absorption) database of spectroscopic line parameters and other trace-species cross-sections. The outputs of the testbed include not only the Stokes 4-vector elements and their sensitivities (Jacobians) with respect to the aerosol single-scattering and physical parameters (such as size and shape parameters, refractive index, and plume height), but also DFS (Degrees of Freedom for Signal) values for the retrieval of these parameters. This testbed can be used as a tool to provide an objective assessment of the aerosol information content that can be retrieved for any constellation of (planned or real) satellite sensors and for any combination of algorithm design factors (in terms of wavelengths, viewing angles, and radiance and/or polarization to be measured or used). We summarize the components of the testbed, including the derivation and validation of analytical formulae for the Jacobian calculations. Benchmark calculations from the forward model are documented. In the context of NASA's Decadal Survey Mission GEO-CAPE (GEOstationary Coastal and Air Pollution Events), we demonstrate the use of the testbed to conduct a feasibility study of using polarization measurements in and around the O2 A band for the retrieval of aerosol height information from space, as well as to assess the potential improvement in the retrieval of fine- and coarse-mode aerosol optical depth (AOD) through the synergistic use of two future geostationary satellites, GOES-R (Geostationary Operational Environmental Satellite R-series) and TEMPO (Tropospheric Emissions: Monitoring of Pollution). Strong synergy between GOES-R and TEMPO is found, especially in their characterization of surface bi-directional reflectance, which can potentially improve the AOD retrieval to the accuracy required by GEO-CAPE.
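
    One quantity the testbed reports, the Degrees of Freedom for Signal, follows from the measurement Jacobian together with the measurement-error and a priori covariances via the standard optimal-estimation expression DFS = trace(A), with averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K. The sketch below uses random placeholder matrices rather than testbed output.

```python
import numpy as np

rng = np.random.default_rng(3)

n_meas, n_par = 40, 8                      # e.g. radiances/polarization vs. aerosol parameters
K = rng.normal(size=(n_meas, n_par))       # Jacobian of measurements w.r.t. parameters
Se_inv = np.eye(n_meas) / 0.01**2          # inverse measurement-error covariance (1% noise)
Sa_inv = np.eye(n_par) / 1.0**2            # inverse a priori covariance (unit prior std)

# Averaging kernel and Degrees of Freedom for Signal (optimal-estimation formalism).
A = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv @ K)
dfs = np.trace(A)

print(f"DFS = {dfs:.2f} of {n_par} parameters")
```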

  4. Method and apparatus for determining position using global positioning satellites

    NASA Technical Reports Server (NTRS)

    Ward, John (Inventor); Ward, William S. (Inventor)

    1998-01-01

    A global positioning satellite receiver having an antenna for receiving an L1 signal from a satellite. The L1 signal is processed by a preamplifier stage including a band-pass filter and a low-noise amplifier and output as a radio frequency (RF) signal. A mixer receives and de-spreads the RF signal in response to a pseudo-random noise code, i.e., a Gold code, generated by an internal pseudo-random noise code generator. A microprocessor enters a code tracking loop, such that during the code tracking loop, it addresses the pseudo-random code generator to cause it to sequentially output pseudo-random codes corresponding to the satellite codes used to spread the L1 signal, until correlation occurs. When an output of the mixer is indicative of the occurrence of correlation between the RF signal and the generated pseudo-random codes, the microprocessor enters an operational state which slows the receiver code sequence to stay locked with the satellite code sequence. The output of the mixer is provided to a detector which, in turn, controls certain routines of the microprocessor. The microprocessor will output pseudo-range information according to an interrupt routine in response to detection of correlation. The pseudo-range information is to be telemetered to a ground station which determines the position of the global positioning satellite receiver.
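
    The code-search idea, sweeping candidate pseudo-random codes and code phases until the correlator output peaks, can be sketched with a toy binary-code search. The codes here are random ±1 sequences standing in for actual Gold codes, and no carrier or Doppler handling is shown.

```python
import numpy as np

rng = np.random.default_rng(4)

code_len, n_sats = 1023, 8
codes = rng.choice([-1, 1], size=(n_sats, code_len))   # stand-ins for per-satellite Gold codes

# Received baseband samples: satellite 5's code, circularly delayed, plus noise.
true_sat, true_delay = 5, 317
rx = np.roll(codes[true_sat], true_delay) + rng.normal(0, 1.0, code_len)

best = None
for sat in range(n_sats):
    # Circular cross-correlation over all code phases via FFT.
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(codes[sat]))).real
    peak = int(np.argmax(np.abs(corr)))
    if best is None or np.abs(corr[peak]) > best[2]:
        best = (sat, peak, np.abs(corr[peak]))

print("detected satellite", best[0], "at code delay", best[1])
```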

  5. Multiscale System for Environmentally-Driven Infectious Disease with Threshold Control Strategy

    NASA Astrophysics Data System (ADS)

    Sun, Xiaodan; Xiao, Yanni

    A multiscale system for environmentally-driven infectious disease is proposed, in which control measures at three different scales are implemented when the number of infected hosts exceeds a certain threshold. Our coupled model successfully describes the feedback mechanisms of between-host dynamics on within-host dynamics by letting a variable at one scale guide the enhancement of interventions at the other scales. The modeling approach provides a novel idea of how to link large-scale dynamics to small-scale dynamics. The dynamic behaviors of the multiscale system on two time scales, i.e. the fast system and the slow system, are investigated. The slow system is further simplified to a two-dimensional Filippov system. For the Filippov system, we study the dynamics of its two subsystems (i.e. the free system and the control system), the sliding-mode dynamics, the boundary equilibrium bifurcations, as well as the global behaviors. We prove that both subsystems may undergo backward bifurcations and that the sliding domain exists. Meanwhile, it is possible that the pseudo-equilibrium exists and is globally stable, or the pseudo-equilibrium, the disease-free equilibrium and the real equilibrium are tri-stable, or the pseudo-equilibrium and the real equilibrium are bi-stable, or the pseudo-equilibrium and the disease-free equilibrium are bi-stable, depending on the threshold value and other parameter values. The global stability of the pseudo-equilibrium reveals that we may maintain the number of infected hosts at a previously given value. Moreover, the bi-stability and tri-stability indicate that whether the number of infected individuals tends to zero, a previously given value, or some other positive value depends on the parameter values and the initial states of the system. These results highlight the challenges in the control of environmentally-driven infectious diseases.

  6. Determining the depositional pattern by resistivity-seismic inversion for the aquifer system of Maira area, Pakistan.

    PubMed

    Akhter, Gulraiz; Farid, Asim; Ahmad, Zulfiqar

    2012-01-01

    Velocity and density measured in a well are crucial for synthetic seismic generation which is, in turn, a key to interpreting real seismic amplitude in terms of lithology, porosity and fluid content. Investigations made in the water wells usually consist of spontaneous potential, resistivity long and short normal, point resistivity and gamma ray logs. The sonic logs are not available because these are usually run in the wells drilled for hydrocarbons. To generate the synthetic seismograms, sonic and density logs are required, which are useful to precisely mark the lithology contacts and formation tops. An attempt has been made to interpret the subsurface soil of the aquifer system by means of resistivity to seismic inversion. For this purpose, resistivity logs and surface resistivity sounding were used and the resistivity logs were converted to sonic logs whereas surface resistivity sounding data transformed into seismic curves. The converted sonic logs and the surface seismic curves were then used to generate synthetic seismograms. With the utilization of these synthetic seismograms, pseudo-seismic sections have been developed. Subsurface lithologies encountered in wells exhibit different velocities and densities. The reflection patterns were marked by using amplitude standout, character and coherence. These pseudo-seismic sections were later tied to well synthetics and lithologs. In this way, a lithology section was created for the alluvial fill. The cross-section suggested that the eastern portion of the studied area mainly consisted of sandy fill and the western portion constituted clayey part. This can be attributed to the depositional environment by the Indus and the Kabul Rivers.

  7. The cometary and asteroidal origins of meteors

    NASA Technical Reports Server (NTRS)

    Kresak, L.

    1973-01-01

    A quantitative examination of the gravitational and nongravitational changes of orbits shows that for larger interplanetary bodies the perturbations by Jupiter strongly predominate over all other effects, which include perturbations by other planets, splitting of comet nuclei and jet effects of cometary ejections. The structure of meteor streams indicates that the mutual compensation of the changes in individual elements entering the Jacobian integral, which is characteristic for the comets, does not work among the meteoroids. It appears that additional forces of a different kind must exert appreciable influence on the motion of interplanetary particles of meteoroid size. Nevertheless, the distribution of the Jacobian constant in various samples of meteor orbits furnishes some information on the type of their parent bodies and on the relative contribution of individual sources.

  8. Method for six-legged robot stepping on obstacles by indirect force estimation

    NASA Astrophysics Data System (ADS)

    Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun

    2016-07-01

    Adaptive gaits for legged robots often require force sensors installed on the foot tips; however, impact, temperature, or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots using leg structures based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct kinematics model and the inverse kinematics model are established. The force Jacobian matrix is derived based on the kinematics model. Thus, the indirect force estimation model is established. Then, the relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is described. Furthermore, an adaptive tripod static gait is designed. The robot alters its leg trajectory to step on obstacles by using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. An experiment is carried out to validate the indirect force estimation model. The adaptive gait is tested in another experiment. Experimental results show that the robot can successfully step on a 0.2 m-high obstacle. This paper proposes a novel method for a six-legged robot with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment of the robot's foot tips.
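
    The indirect force estimation rests on the static mapping between joint torques and the foot-tip force through the force Jacobian, tau = J^T F, so the external force can be recovered with a pseudo-inverse. The 3x3 Jacobian and torque values below are arbitrary placeholders, not the UP-2UPS leg model.

```python
import numpy as np

# Hypothetical force Jacobian of one leg at the current configuration
# (3 actuated joints, 3-D foot-tip force), defined by tau = J.T @ F.
J = np.array([[0.30, -0.10, 0.05],
              [0.05,  0.25, -0.02],
              [0.00,  0.08,  0.20]])

tau = np.array([2.1, -0.4, 1.3])       # joint torques inferred from motor currents (N·m)

# tau = J^T F  =>  F = pinv(J^T) tau  (least-squares estimate of the contact force)
F_est = np.linalg.pinv(J.T) @ tau
print("estimated foot-tip force (N):", F_est)

# A simple contact test for the adaptive gait: declare contact above a force threshold.
print("contact detected:", np.linalg.norm(F_est) > 5.0)
```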

  9. Learning by Demonstration for Motion Planning of Upper-Limb Exoskeletons

    PubMed Central

    Lauretti, Clemente; Cordella, Francesca; Ciancio, Anna Lisa; Trigili, Emilio; Catalan, Jose Maria; Badesa, Francisco Javier; Crea, Simona; Pagliara, Silvio Marcello; Sterzi, Silvia; Vitiello, Nicola; Garcia Aracil, Nicolas; Zollo, Loredana

    2018-01-01

    The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms with the inverse Jacobian; this approach allows exploiting the available Degrees of Freedom (i.e. DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, if used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied in the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allows successfully assisting patients during Activities of Daily Living (ADLs) in unstructured environments, while ensuring that anthropomorphic criteria are satisfied in the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Motion Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton and four patients with Limb Girdle Muscular Dystrophy. Validation was addressed to (i) compare the performance of the proposed motion planning with traditional methods; (ii) assess the generalization capabilities of the proposed method with respect to the environment variability. Three ADLs were chosen to validate the system: drinking, pouring and lifting a light sphere. The achieved results showed a 100% success rate in task fulfillment, with a high level of generalization with respect to the environment variability. Moreover, an anthropomorphic configuration of the exoskeleton is always ensured. PMID:29527161
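
    For reference, the "traditional" inverse-Jacobian approach that the paper contrasts itself with amounts to iterating joint updates dq = J^+ dx, here shown with a damped pseudo-inverse for robustness near singularities. The planar 2-link arm below is a generic toy, not the 4-DoF exoskeleton.

```python
import numpy as np

L1, L2 = 0.3, 0.25                      # link lengths of a toy planar 2-link arm (m)

def fk(q):
    """Forward kinematics: joint angles -> end-effector position."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik_step(q, x_target, damping=1e-3):
    """One damped pseudo-inverse (Levenberg-Marquardt style) IK update."""
    J = jacobian(q)
    err = x_target - fk(q)
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(2))
    return q + J_pinv @ err

q = np.array([0.2, 0.4])
target = np.array([0.35, 0.25])
for _ in range(50):
    q = ik_step(q, target)
print("final position error:", np.linalg.norm(target - fk(q)))
```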

  10. Learning by Demonstration for Motion Planning of Upper-Limb Exoskeletons.

    PubMed

    Lauretti, Clemente; Cordella, Francesca; Ciancio, Anna Lisa; Trigili, Emilio; Catalan, Jose Maria; Badesa, Francisco Javier; Crea, Simona; Pagliara, Silvio Marcello; Sterzi, Silvia; Vitiello, Nicola; Garcia Aracil, Nicolas; Zollo, Loredana

    2018-01-01

    The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms with the inverse Jacobian; this approach allows exploiting the available Degrees of Freedom (i.e. DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, if used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied in the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allows successfully assisting patients during Activities of Daily Living (ADLs) in unstructured environments, while ensuring that anthropomorphic criteria are satisfied in the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Motion Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton and four patients with Limb Girdle Muscular Dystrophy. Validation was addressed to (i) compare the performance of the proposed motion planning with traditional methods; (ii) assess the generalization capabilities of the proposed method with respect to the environment variability. Three ADLs were chosen to validate the system: drinking, pouring and lifting a light sphere. The achieved results showed a 100% success rate in task fulfillment, with a high level of generalization with respect to the environment variability. Moreover, an anthropomorphic configuration of the exoskeleton is always ensured.

  11. Nonlinear Spatial Inversion Without Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Nawaz, A.

    2017-12-01

    High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The solutions are described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high- or low-probability regions of model parameters. However, such methods converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as 'localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location, as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable, so these do not need to be estimated from samples as is required in MC methods. On a 2-D test example the method is shown to outperform previous methods significantly, and at a fraction of the computational cost. In many foreseeable applications there are therefore no serious impediments to extending the method to 3-D spatial models.

  12. Application of a Modal Approach in Solving the Static Stability Problem for Electric Power Systems

    NASA Astrophysics Data System (ADS)

    Sharov, J. V.

    2017-12-01

    The application of a modal approach to solving the static stability problem for power systems is examined. It is proposed to use the norm of the matrix exponential as a generalized transition function of the power system's disturbed motion. Based on the concept of a stability radius and the pseudospectrum of the Jacobian matrix, the necessary and sufficient conditions for the existence of static stability margins were determined. The capabilities and advantages of the modal approach in designing centralized or distributed control, and the prospects for the analysis of nonlinear oscillations and for ensuring dynamic stability, are demonstrated.
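
    The "matrix exponential norm as a generalized transition function" idea can be illustrated directly: for a linearized system x' = Ax, the worst-case amplification at time t is ||exp(At)||, while asymptotic stability is governed by the spectral abscissa of the Jacobian. The 2x2 matrix below is a toy non-normal example, not a power-system Jacobian.

```python
import numpy as np
from scipy.linalg import expm

# Toy non-normal Jacobian: eigenvalues are stable, yet transients can grow.
A = np.array([[-0.1, 5.0],
              [ 0.0, -0.2]])

alpha = max(np.linalg.eigvals(A).real)   # spectral abscissa (< 0 => asymptotically stable)
print("spectral abscissa:", alpha)

# Generalized transition function: spectral norm of the matrix exponential over time.
for t in (0.0, 2.0, 5.0, 20.0, 100.0):
    print(f"t = {t:6.1f}   ||exp(At)|| = {np.linalg.norm(expm(A*t), 2):.3f}")
```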

  13. Bearing-only Cooperative Localization: Simulation and Experimental Results

    DTIC Science & Technology

    2013-01-01

    The matrices $F_i$ and $B_i$ are the system Jacobians with respect to the state $X_i$ and the control $u_i$, which are given below:
    $$F_i = I_3 + T_s \left.\frac{\partial f_i}{\partial X_i}\right|_{X_i = X_i(k)} = \begin{pmatrix} 1 & 0 & -V_i T_s \sin\psi(k) \\ 0 & 1 & V_i T_s \cos\psi(k) \\ 0 & 0 & 1 \end{pmatrix}, \qquad (8)$$
    $$B_i = T_s \left.\frac{\partial f_i}{\partial u_i}\right|_{u_i = u_i(k)} = \begin{pmatrix} T_s \cos\psi(k) & 0 \\ T_s \sin\psi(k) & 0 \\ 0 & T_s \end{pmatrix}, \qquad (9)$$
    and $Q_i(k) = \begin{pmatrix} \sigma_{v_i}^2 & 0 \\ 0 & \sigma_{\omega_i}^2 \end{pmatrix}$, where $\sigma_{v_i}$ and $\sigma_{\omega_i}$
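
    A direct numerical rendering of the Jacobians in (8)-(9) for the unicycle-like motion model, assuming state X = (x, y, psi) and control u = (V, omega); the sample time, state, and noise values below are arbitrary.

```python
import numpy as np

def motion_jacobians(psi, V, Ts):
    """State and control Jacobians of x_{k+1} = f(x_k, u_k) for a unicycle model,
    matching equations (8) and (9)."""
    F = np.array([[1.0, 0.0, -V * Ts * np.sin(psi)],
                  [0.0, 1.0,  V * Ts * np.cos(psi)],
                  [0.0, 0.0,  1.0]])
    B = np.array([[Ts * np.cos(psi), 0.0],
                  [Ts * np.sin(psi), 0.0],
                  [0.0,              Ts ]])
    return F, B

F, B = motion_jacobians(psi=0.6, V=1.2, Ts=0.1)
Q = np.diag([0.05**2, 0.02**2])              # process noise covariance (sigma_v, sigma_omega)
P_pred = F @ np.eye(3) @ F.T + B @ Q @ B.T   # covariance prediction step of an EKF
print(P_pred)
```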

  14. Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns.

    PubMed

    Childs, Dorothee; Grimbs, Sergio; Selbig, Joachim

    2015-06-15

    Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system's Jacobian matrix that depends solely on the network structure, steady-state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that screens out such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main sources of potential instabilities are mutations in the enzyme alpha-ketoglutarate dehydrogenase. © The Author 2015. Published by Oxford University Press.
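
    A bare-bones Monte Carlo version of the procedure described above, assuming the usual SKM parameterization in which the (normalized) Jacobian factorizes as J = Lambda * theta with Lambda_ij = N_ij v_j / x_i (stoichiometry N, steady-state fluxes v, concentrations x) and theta holding the randomly sampled scaled elasticities. The two-metabolite toy network and the elasticity ranges are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy network: 2 metabolites, 3 reactions (production, interconversion, consumption).
N = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])          # stoichiometric matrix
v0 = np.array([1.0, 1.0, 1.0])        # steady-state fluxes
x0 = np.array([2.0, 0.5])             # steady-state concentrations

Lam = N * v0[None, :] / x0[:, None]   # Lambda_ij = N_ij * v_j / x_i

n_models, n_stable = 5000, 0
for _ in range(n_models):
    # Scaled elasticities: rows are reactions, columns are metabolites.
    theta = np.zeros((3, 2))
    theta[1, 0] = rng.uniform(0, 1)    # reaction 2 w.r.t. metabolite 1
    theta[2, 1] = rng.uniform(-1, 1)   # reaction 3 w.r.t. metabolite 2 (negative mimics substrate inhibition)
    J = Lam @ theta
    if np.all(np.linalg.eigvals(J).real < 0):
        n_stable += 1

print(f"fraction of stable SKM instances: {n_stable / n_models:.3f}")
```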

  15. Optical control of the Advanced Technology Solar Telescope.

    PubMed

    Upton, Robert

    2006-08-10

    The Advanced Technology Solar Telescope (ATST) is an off-axis Gregorian astronomical telescope design. The ATST is expected to be subject to thermal and gravitational effects that result in misalignments of its mirrors and warping of its primary mirror. These effects require active, closed-loop correction to maintain its as-designed diffraction-limited optical performance. The simulation and modeling of the ATST with a closed-loop correction strategy are presented. The correction strategy is derived from the linear mathematical properties of two Jacobian, or influence, matrices that map the ATST rigid-body (RB) misalignments and primary mirror figure errors to wavefront sensor (WFS) measurements. The two Jacobian matrices also quantify the sensitivities of the ATST to RB and primary mirror figure perturbations. The modeled active correction strategy results in a decrease of the rms wavefront error averaged over the field of view (FOV) from 500 to 19 nm, subject to 10 nm rms WFS noise. This result is obtained utilizing nine WFSs distributed in the FOV with a 300 nm rms astigmatism figure error on the primary mirror. Correction of the ATST RB perturbations is demonstrated for an optimum subset of three WFSs with corrections improving the ATST rms wavefront error from 340 to 17.8 nm. In addition to the active correction of the ATST, an analytically robust sensitivity analysis that can be generally extended to a wider class of optical systems is presented.
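
    The closed-loop correction described above boils down to inverting the influence (Jacobian) matrix that maps rigid-body and figure degrees of freedom to WFS measurements; a pseudo-inverse of that matrix gives the correction command. The matrix below is random, standing in for the ATST sensitivity matrices, and the loop gain is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(6)

n_wfs_meas, n_dof = 9 * 12, 10                # e.g. 9 WFSs x 12 modes each, 10 RB/figure DOFs
J = rng.normal(size=(n_wfs_meas, n_dof))      # influence matrix: d(WFS)/d(DOF)

true_pert = rng.normal(0, 1.0, n_dof)                        # unknown misalignments/figure error
wfs = J @ true_pert + rng.normal(0, 0.05, n_wfs_meas)        # noisy WFS measurements

# Least-squares reconstruction and correction command, applied with a loop gain < 1.
est_pert = np.linalg.pinv(J) @ wfs
command = -0.7 * est_pert

print("initial perturbation norm:  ", np.linalg.norm(true_pert))
print("residual after one iteration:", np.linalg.norm(true_pert + command))
```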

  16. Multivariate Tensor-based Morphometry on Surfaces: Application to Mapping Ventricular Abnormalities in HIV/AIDS

    PubMed Central

    Wang, Yalin; Zhang, Jie; Gutman, Boris; Chan, Tony F.; Becker, James T.; Aizenstein, Howard J.; Lopez, Oscar L.; Tamburo, Robert J.; Toga, Arthur W.; Thompson, Paul M.

    2010-01-01

    Here we developed a new method, called multivariate tensor-based surface morphometry (TBM), and applied it to study lateral ventricular surface differences associated with HIV/AIDS. Using concepts from differential geometry and the theory of differential forms, we created mathematical structures known as holomorphic one-forms, to obtain an efficient and accurate conformal parameterization of the lateral ventricular surfaces in the brain. The new meshing approach also provides a natural way to register anatomical surfaces across subjects, and improves on prior methods as it handles surfaces that branch and join at complex 3D junctions. To analyze anatomical differences, we computed new statistics from the Riemannian surface metrics - these retain multivariate information on local surface geometry. We applied this framework to analyze lateral ventricular surface morphometry in 3D MRI data from 11 subjects with HIV/AIDS and 8 healthy controls. Our method detected a 3D profile of surface abnormalities even in this small sample. Multivariate statistics on the local tensors gave better effect sizes for detecting group differences, relative to other TBM-based methods including analysis of the Jacobian determinant, the largest and smallest eigenvalues of the surface metric, and the pair of eigenvalues of the Jacobian matrix. The resulting analysis pipeline may improve the power of surface-based morphometry studies of the brain. PMID:19900560

  17. Effects of high-frequency damping on iterative convergence of implicit viscous solver

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko

    2017-11-01

    This paper discusses the effects of high-frequency damping on the iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. On both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and α = 4/3 are suitable values for robust and efficient computations, and α = 4/3 is recommended for the diffusion equation, for which it achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-Free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, as it provides robust and efficient convergence for a wide range of α.

  18. A Linearized Prognostic Cloud Scheme in NASA's Goddard Earth Observing System Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Errico, Ronald M.; Gelaro, Ronald; Kim, Jong G.; Mahajan, Rahul

    2015-01-01

    A linearized prognostic cloud scheme has been developed to accompany the linearized convection scheme recently implemented in NASA's Goddard Earth Observing System data assimilation tools. The linearization, developed from the nonlinear cloud scheme, treats cloud variables prognostically so they are subject to linearized advection, diffusion, generation, and evaporation. Four linearized cloud variables are modeled, the ice and water phases of clouds generated by large-scale condensation and, separately, by detraining convection. For each species the scheme models their sources, sublimation, evaporation, and autoconversion. Large-scale, anvil and convective species of precipitation are modeled and evaporated. The cloud scheme exhibits linearity and realistic perturbation growth, except around the generation of clouds through large-scale condensation. Discontinuities and steep gradients are widely used here and severe problems occur in the calculation of cloud fraction. For data assimilation applications this poor behavior is controlled by replacing this part of the scheme with a perturbation model. For observation impacts, where efficiency is less of a concern, a filtering is developed that examines the Jacobian. The replacement scheme is only invoked if Jacobian elements or eigenvalues violate a series of tuned constants. The linearized prognostic cloud scheme is tested by comparing the linear and nonlinear perturbation trajectories for 6-, 12-, and 24-h forecast times. The tangent linear model performs well and perturbations of clouds are well captured for the lead times of interest.

  19. Using ancestry matching to combine family-based and unrelated samples for genome-wide association studies

    PubMed Central

    Crossett, Andrew; Kent, Brian P.; Klei, Lambertus; Ringquist, Steven; Trucco, Massimo; Roeder, Kathryn; Devlin, Bernie

    2015-01-01

    We propose a method to analyze family-based samples together with unrelated cases and controls. The method builds on the idea of matched case–control analysis using conditional logistic regression (CLR). For each trio within the family, a case (the proband) and matched pseudo-controls are constructed, based upon the transmitted and untransmitted alleles. Unrelated controls, matched by genetic ancestry, supplement the sample of pseudo-controls; likewise unrelated cases are also paired with genetically matched controls. Within each matched stratum, the case genotype is contrasted with control pseudo-control genotypes via CLR, using a method we call matched-CLR (mCLR). Eigenanalysis of numerous SNP genotypes provides a tool for mapping genetic ancestry. The result of such an analysis can be thought of as a multidimensional map, or eigenmap, in which the relative genetic similarities and differences amongst individuals is encoded in the map. Once constructed, new individuals can be projected onto the ancestry map based on their genotypes. Successful differentiation of individuals of distinct ancestry depends on having a diverse, yet representative sample from which to construct the ancestry map. Once samples are well-matched, mCLR yields comparable power to competing methods while ensuring excellent control over Type I error. PMID:20862653

  20. A Secure, Intelligent, and Smart-Sensing Approach for Industrial System Automation and Transmission over Unsecured Wireless Networks

    PubMed Central

    Shahzad, Aamir; Lee, Malrey; Xiong, Neal Naixue; Jeong, Gisung; Lee, Young-Keun; Choi, Jae-Young; Mahesar, Abdul Wheed; Ahmad, Iftikhar

    2016-01-01

    In industrial supervisory control and data acquisition (SCADA) systems, the pseudo-transport layer of the distributed network protocol (DNP3) performs the functions of the transport layer and network layer of the open systems interconnection (OSI) model. This study used a simulated water pumping system, in which the network nodes are directly and wirelessly connected with sensors and are monitored by the main controller as part of the wireless SCADA system. This study also focuses on the security issues inherent in the pseudo-transport layer of the DNP3 protocol. During disassembly and reassembly, the pseudo-transport layer keeps track of the byte sequence. However, no mechanism is available that can verify the message or maintain the integrity of the bytes received/transmitted from/to the data link layer or in the send/respond exchanges with the main controller and sensors. To properly and sequentially keep track of the bytes, a mechanism is required that can perform verification while bytes are received/transmitted from/to the lower layer of the DNP3 protocol or sent to/received from field sensors. For security and byte verification purposes, a mechanism is proposed for the pseudo-transport layer that employs a cryptographic algorithm. A dynamic-choice security buffer (SB) is designed and employed during the security development. To achieve the desired goals of the proposed study, a pseudo-transport layer stack model is designed using the DNP3 protocol open library, and the security is deployed and tested without changing the original design. PMID:26950129

  1. A Secure, Intelligent, and Smart-Sensing Approach for Industrial System Automation and Transmission over Unsecured Wireless Networks.

    PubMed

    Shahzad, Aamir; Lee, Malrey; Xiong, Neal Naixue; Jeong, Gisung; Lee, Young-Keun; Choi, Jae-Young; Mahesar, Abdul Wheed; Ahmad, Iftikhar

    2016-03-03

    In industrial supervisory control and data acquisition (SCADA) systems, the pseudo-transport layer of the distributed network protocol (DNP3) performs the functions of the transport layer and network layer of the open systems interconnection (OSI) model. This study used a simulated water pumping system, in which the network nodes are directly and wirelessly connected with sensors and are monitored by the main controller as part of the wireless SCADA system. This study also focuses on the security issues inherent in the pseudo-transport layer of the DNP3 protocol. During disassembly and reassembly, the pseudo-transport layer keeps track of the byte sequence. However, no mechanism is available that can verify the message or maintain the integrity of the bytes received/transmitted from/to the data link layer or in the send/respond exchanges with the main controller and sensors. To properly and sequentially keep track of the bytes, a mechanism is required that can perform verification while bytes are received/transmitted from/to the lower layer of the DNP3 protocol or sent to/received from field sensors. For security and byte verification purposes, a mechanism is proposed for the pseudo-transport layer that employs a cryptographic algorithm. A dynamic-choice security buffer (SB) is designed and employed during the security development. To achieve the desired goals of the proposed study, a pseudo-transport layer stack model is designed using the DNP3 protocol open library, and the security is deployed and tested without changing the original design.

  2. Obtaining T1-T2 distribution functions from 1-dimensional T1 and T2 measurements: The pseudo 2-D relaxation model

    NASA Astrophysics Data System (ADS)

    Williamson, Nathan H.; Röding, Magnus; Galvosas, Petrik; Miklavcic, Stanley J.; Nydén, Magnus

    2016-08-01

    We present the pseudo 2-D relaxation model (P2DRM), a method to estimate multidimensional probability distributions of material parameters from independent 1-D measurements. We illustrate its use on 1-D T1 and T2 relaxation measurements of saturated rock and evaluate it on both simulated and experimental T1-T2 correlation measurement data sets. Results were in excellent agreement with the actual, known 2-D distribution in the case of the simulated data set. In both the simulated and experimental case, the functional relationships between T1 and T2 were in good agreement with the T1-T2 correlation maps from the 2-D inverse Laplace transform of the full 2-D data sets. When a 1-D CPMG experiment is combined with a rapid T1 measurement, the P2DRM provides a double-shot method for obtaining a T1-T2 relationship, with significantly decreased experimental time in comparison to the full T1-T2 correlation measurement.

  3. Improving Bandwidth Utilization in a 1 Tbps Airborne MIMO Communications Downlink

    DTIC Science & Technology

    2013-03-21

    number of transmitters).
    $$C = \log_2 \left| I_{N_r} + \frac{E_s}{N_t N_0} H H^H \right| \qquad (2.32)$$
    In the signal-to-noise ratio, $E_s$ represents the total energy from all transmitters... The channel matrix pseudo-inverse is computed by (2.36) [6, p. 970]:
    $$H^{+} = \left( H^H H \right)^{-1} H^H. \qquad (2.36)$$
    2.6.5 Minimum Mean-Squared Error Detection. Minimum Mean Squared...
    $$H^{\dagger} = \left( H^H H + \frac{N_t}{\mathrm{SNR}} I \right)^{-1} H^H. \qquad (3.14)$$
    Equation (3.14) was defined in [2] as an implementation of an MMSE equalizer, and was applied to the received
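
    Equations (2.36) and (3.14) translate directly into the zero-forcing and MMSE linear detectors. The sketch below applies both to a random flat-fading channel; the dimensions, SNR, and constellation are arbitrary choices, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(7)

Nt, Nr, snr_db = 4, 6, 15
snr = 10 ** (snr_db / 10)

# Random flat-fading channel and a QPSK symbol vector.
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
s = (rng.choice([-1, 1], Nt) + 1j * rng.choice([-1, 1], Nt)) / np.sqrt(2)
noise = (rng.normal(size=Nr) + 1j * rng.normal(size=Nr)) * np.sqrt(1 / (2 * snr))
y = H @ s + noise

Hh = H.conj().T
W_zf   = np.linalg.inv(Hh @ H) @ Hh                             # eq. (2.36): zero-forcing / pseudo-inverse
W_mmse = np.linalg.inv(Hh @ H + (Nt / snr) * np.eye(Nt)) @ Hh   # eq. (3.14): MMSE equalizer

print("ZF estimate:  ", np.round(W_zf @ y, 2))
print("MMSE estimate:", np.round(W_mmse @ y, 2))

# Capacity of eq. (2.32) for this channel realization, taking Es = 1 and N0 = 1/snr:
C = np.log2(np.linalg.det(np.eye(Nr) + (snr / Nt) * (H @ Hh)).real)
print("capacity:", round(C, 2), "bit/s/Hz")
```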

  4. An optimized BP neural network based on genetic algorithm for static decoupling of a six-axis force/torque sensor

    NASA Astrophysics Data System (ADS)

    Fu, Liyue; Song, Aiguo

    2018-02-01

    In order to improve the measurement precision of a six-axis force/torque sensor for robots, a BP decoupling algorithm optimized by a genetic algorithm (GA-BP algorithm) is proposed in this paper. The weights and thresholds of a BP neural network with a 6-10-6 topology are optimized by the GA to decouple the six-axis force/torque sensor. Compared with traditional decoupling algorithms, namely the pseudo-inverse of the calibration matrix and the classical BP algorithm, the results validate the good decoupling performance of the GA-BP algorithm, and the coupling errors are reduced.
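
    For reference, the pseudo-inverse decoupling baseline that the GA-BP scheme is compared against looks like this: a linear decoupling matrix is fitted from calibration data (applied loads versus raw six-channel readings) by least squares and then applied to new readings. The data below are synthetic, with an invented coupling matrix.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic calibration: n samples of applied 6-D loads F and coupled sensor readings V.
n = 200
F_cal = rng.uniform(-50, 50, size=(n, 6))                # applied forces/torques
coupling = np.eye(6) + 0.05 * rng.normal(size=(6, 6))    # invented cross-coupling
V_cal = F_cal @ coupling.T + rng.normal(0, 0.1, (n, 6))  # raw sensor readings

# Static decoupling matrix via least squares (pseudo-inverse of the calibration data).
D = np.linalg.lstsq(V_cal, F_cal, rcond=None)[0].T       # F is approximated by V @ D.T

V_new = np.array([[12.0, -3.5, 0.8, 1.2, -0.4, 7.9]])
print("decoupled load estimate:", V_new @ D.T)
```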

  5. Cold light dark matter in extended seesaw models

    NASA Astrophysics Data System (ADS)

    Boulebnane, Sami; Heeck, Julian; Nguyen, Anne; Teresi, Daniele

    2018-04-01

    We present a thorough discussion of light dark matter produced via freeze-in in two-body decays A → B DM. If A and B are quasi-degenerate, the dark matter particle has a cold spectrum even for keV masses. We show this explicitly by calculating the transfer function that encodes the impact on structure formation. As examples for this setup we study extended seesaw mechanisms with a spontaneously broken global U(1) symmetry, such as the inverse seesaw. The keV-scale pseudo-Goldstone dark matter particle is then naturally produced cold by the decays of the quasi-degenerate right-handed neutrinos.

  6. Hybrid spread spectrum radio system

    DOEpatents

    Smith, Stephen F.; Dress, William B.

    2010-02-02

    Systems and methods are described for hybrid spread spectrum radio systems. A method includes modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control an amplification circuit that provides a gain to the signal. Another method includes: modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control a fast hopping frequency synthesizer; and fast frequency hopping the signal with the fast hopping frequency synthesizer, wherein multiple frequency hops occur within a single data-bit time.

  7. New RADIOM algorithm using inverse EOS

    NASA Astrophysics Data System (ADS)

    Busquet, Michel; Sokolov, Igor; Klapisch, Marcel

    2012-10-01

    The RADIOM model [1,2] allows one to implement non-LTE atomic physics at very low extra CPU cost. Although originally heuristic, RADIOM has been physically justified [3], and some accounting for auto-ionization has been included [2]. RADIOM defines an ionization temperature Tz derived from the electronic density and the actual electronic temperature Te. LTE databases are then queried for properties at Tz, and NLTE values are derived from them. Some hydro-codes (like FAST at NRL, Ramis' MULTI, or the CRASH code at U.Mich) use inverse EOS, starting from the total internal energy Etot and returning the temperature. In the NLTE case, inverse EOS requires solving implicit relations between Te, Tz, and Etot. We shall describe these relations and an efficient solver successively implemented in some of our codes. [1] M. Busquet, Radiation dependent ionization model for laser-created plasmas, Phys. Fluids B 5, 4191 (1993). [2] M. Busquet, D. Colombant, M. Klapisch, D. Fyfe, J. Gardner, Improvements to the RADIOM non-LTE model, HEDP 5, 270 (2009). [3] M. Busquet, Onset of pseudo-thermal equilibrium within configurations and super-configurations, JQSRT 99, 131 (2006).

  8. SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.

    PubMed

    Lee, Hyunyeol; Park, Jaeseok

    2013-07-01

    Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal fluid attenuated brain images without long inversion recovery preparation. Despite the advantages, the weighted-averaging-based technique suffers from noise amplification resulting from different levels of cerebrospinal fluid signal modulations over the two acquisitions. The purpose of this work is to develop a signal-to-noise ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution, while those in the second acquisition are calculated using a two-step pseudo-steady state signal transition with a high flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signals in both acquisitions. Low spatial frequency signals are sampled during the high flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate signal evolutions of brain tissues along the echo train and optimize imaging parameters. In vivo studies demonstrate that, compared with conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields a 74% increase in apparent signal-to-noise ratio for gray matter and a 32% decrease in imaging time. The proposed method can be a potential alternative to conventional fluid-attenuated imaging. Copyright © 2012 Wiley Periodicals, Inc.

  9. Analysis and Synthesis of Pseudo-Periodic 1/f-Like Noise by Means of Wavelets with Applications to Digital Audio

    NASA Astrophysics Data System (ADS)

    Polotti, Pietro; Evangelista, Gianpaolo

    2001-12-01

    Voiced musical sounds have nonzero energy in sidebands of the frequency partials. Our work is based on the assumption, often experimentally verified, that the energy distribution of the sidebands is shaped as powers of the inverse of the distance from the closest partial. The power spectrum of these pseudo-periodic processes is modeled by means of a superposition of modulated 1/f components, that is, by a pseudo-periodic 1/f-like process. Due to the fundamental self-similar character of the wavelet transform, 1/f processes can be fruitfully analyzed and synthesized by means of wavelets. We obtain a set of very loosely correlated coefficients at each scale level that can be well approximated by white noise in the synthesis process. Our computational scheme is based on an orthogonal multichannel filter bank and a dyadic wavelet transform per channel. The channels are tuned to the left and right sidebands of the harmonics so that the sidebands are mutually independent. The structure computes the expansion coefficients of a new orthogonal and complete set of harmonic-band wavelets. The main point of our scheme is that we need only two parameters per harmonic in order to model the stochastic fluctuations of sounds from a pure periodic behavior.

  10. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    DOE PAGES

    Chacon, Luis; Stanier, Adam John

    2016-12-01

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
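
    The core trick of a Jacobian-free Newton-Krylov solver is that the Krylov method only needs Jacobian-vector products, which can be approximated by finite differences of the nonlinear residual. A minimal sketch on a toy two-equation system (not the extended MHD model, and without the physics-based preconditioner) might look as follows.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):
          """Toy nonlinear system F(u) = 0, with solution u = (1, 2)."""
          return np.array([u[0]**2 + u[1] - 3.0,
                           u[0] + u[1]**2 - 5.0])

      def jfnk_solve(u, tol=1e-10, eps=1e-7):
          """Newton iteration with a matrix-free GMRES solve of each linear update."""
          for _ in range(50):
              F = residual(u)
              if np.linalg.norm(F) < tol:
                  break
              # Matrix-free Jacobian-vector product: J v ~ (F(u + eps v) - F(u)) / eps
              J = LinearOperator((2, 2),
                                 matvec=lambda v: (residual(u + eps * v) - F) / eps)
              du, _ = gmres(J, -F)            # Krylov solve for the Newton update
              u = u + du
          return u

      print(np.round(jfnk_solve(np.array([1.0, 1.0])), 6))   # -> [1. 2.]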

  11. Pseudo-outbreak of Penicillium in an outpatient obstetrics and gynecology clinic.

    PubMed

    Sood, Geetika; Huber, Kerri; Dam, Lisa; Riedel, Stefan; Grubb, Lisa; Zenilman, Jonathan; Perl, Trish M; Argani, Cynthia

    2017-05-01

    We report an unusual pseudo-outbreak of Penicillium that occurred in patients seen in an outpatient obstetrics and gynecology clinic. The pseudo-outbreak was detected in late 2012, when the microbiology department reported a series of vaginal cultures positive for Penicillium spp. Our investigation found Penicillium spp in both patient and environmental samples and was potentially associated with the practice of wetting gloves with tap water by a health care worker prior to patient examination. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  12. A Dynamical Analysis of a Piecewise Smooth Pest Control SI Model

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Liu, Wanbo; Tao, Fennmei; Kang, Baolin; Cong, Jiguang

    In this paper, we propose a piecewise smooth SI pest control system to model the process of spraying pesticides and releasing infectious pests. We assume that the pest population consists of susceptible pests and infectious pests, and that the disease spreads horizontally between pests. We take the susceptible pest population as the control index for deciding whether to implement chemical and biological control strategies. Based on the theory of Filippov systems, the sliding-mode domain and the conditions for the existence of real equilibria, virtual equilibria, pseudo-equilibrium and boundary equilibria are given. Further, we show the global stability of the real equilibria (or boundary equilibria) and the pseudo-equilibrium. Our results can provide theoretical guidance for the problem of pest control.

  13. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallares, V.; Ranero, C. R.

    2012-12-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel-time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of the inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also offers the possibility of including water-layer multiples in the modeling, which is useful whenever these phases can be followed to greater offsets than the primary ones. This increases the amount of information available from the data, yielding more extensive and better constrained velocity and geometry models. We will present synthetic results from benchmark tests for the forward and inverse problems, as well as from more complex inversion tests for different inversion possibilities, such as one with travel times from refracted waves only (i.e. first arrivals) and one with travel times from both refracted and reflected waves. In addition, we will show some preliminary results for the inversion of real 3-D OBS data acquired offshore Ecuador and Colombia.
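
    For readers unfamiliar with the interpolation step, a minimal sketch of trilinear interpolation inside one (pseudo-cubic) velocity cell follows; the corner values and query point are made up, and this is not the tomo3d implementation.

      import numpy as np

      def trilinear(v, xd, yd, zd):
          """Trilinear interpolation inside one cell.

          v is a 2x2x2 array of velocities at the cell corners and (xd, yd, zd)
          are the normalized local coordinates of the query point in [0, 1]."""
          c00 = v[0, 0, 0] * (1 - xd) + v[1, 0, 0] * xd
          c01 = v[0, 0, 1] * (1 - xd) + v[1, 0, 1] * xd
          c10 = v[0, 1, 0] * (1 - xd) + v[1, 1, 0] * xd
          c11 = v[0, 1, 1] * (1 - xd) + v[1, 1, 1] * xd
          c0 = c00 * (1 - yd) + c10 * yd
          c1 = c01 * (1 - yd) + c11 * yd
          return c0 * (1 - zd) + c1 * zd

      corners = np.array([[[1.50, 1.55], [1.60, 1.65]],
                          [[1.70, 1.75], [1.80, 1.85]]])   # km/s at the 8 nodes
      print(trilinear(corners, 0.5, 0.5, 0.5))             # 1.675 at the cell centre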

  14. Computer coordination of limb motion for a three-legged walking robot

    NASA Technical Reports Server (NTRS)

    Klein, C. A.; Patterson, M. R.

    1980-01-01

    Coordination of the limb motion of a vehicle that could perform assembly and maintenance operations on large structures in space is described. Manipulator kinematics, walking robots, the basic control scheme of the robot, and the control of the individual arms are covered. Arm velocities are generally expressed in Cartesian coordinates and are converted to joint velocities using the Jacobian matrix. The calculation of a trajectory for an arm, given a sequence of points through which it is to pass, is described, as is the free gait algorithm that controls the lifting and placing of the robot's legs. The generation of commanded velocities for the robot, and the implementation of those velocities by the algorithm, are discussed. Suggestions for further work in the area of legged robot locomotion are presented.
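
    The Cartesian-to-joint velocity conversion described above is the standard (pseudo-)inverse Jacobian mapping qdot = J^+ xdot. A minimal sketch for a planar two-link arm is shown below; the link lengths, configuration and commanded velocity are illustrative, and this is not the walking-robot code of the report.

      import numpy as np

      def jacobian_2link(q, l1=1.0, l2=0.8):
          """Analytic Jacobian of a planar two-link arm (illustrative geometry)."""
          s1, c1 = np.sin(q[0]), np.cos(q[0])
          s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
          return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                           [ l1 * c1 + l2 * c12,  l2 * c12]])

      q = np.array([0.3, 0.6])                 # current joint angles (rad)
      xdot = np.array([0.05, -0.02])           # commanded Cartesian velocity (m/s)

      # Joint rates from the pseudo-inverse Jacobian: qdot = J^+ xdot.
      qdot = np.linalg.pinv(jacobian_2link(q)) @ xdot
      print(np.round(qdot, 4))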

  15. Slow photon amplification of gas-phase ethanol photo-oxidation in titania inverse opal photonic crystals

    NASA Astrophysics Data System (ADS)

    Jovic, Vedran; Idriss, Hicham; Waterhouse, Geoffrey I. N.

    2016-11-01

    Here we describe the successful fabrication of six titania inverse opal (TiO2 IO) photocatalysts with fcc[1 1 1] pseudo photonic band gaps (PBGs) tuned to span the UV-vis region. Photocatalysts were fabricated by a colloidal crystal templating and sol-gel approach - a robust and highly applicable bottom-up scheme which allowed for precise control over the geometric and optical properties of the TiO2 IO photocatalysts. Optical properties of the TiO2 IO thin films were investigated in detail by UV-vis transmittance and reflectance measurements. The PBG along the fcc[1 1 1] direction in the TiO2 IOs was dependent on the inter-planar spacing in the [1 1 1] direction, the incident angle of light and the refractive index of the medium filling the macropores in the IOs, in agreement with a modified Bragg's law expression. Calculated photonic band structures for the photocatalysts revealed a PBG along the Γ → L direction at a/λ ∼ 0.74, in agreement with the experimental optical data. By coupling the low frequency edge of the PBG along the [1 1 1] direction with the electronic absorption edge of anatase TiO2, a two-fold enhancement in the rate of gas phase ethanol photo-oxidation in air was achieved. This enhancement appears to be associated with a 'slow photon' effect that acts to both enhance TiO2 absorption and inhibit spontaneous emission (i.e. suppress electron-hole pair recombination).
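
    The modified Bragg's law referred to above is commonly written as lambda = 2 d_111 sqrt(n_eff^2 - sin^2(theta)), with n_eff obtained from the volume-weighted indices of the skeleton and the pore-filling medium. The small sketch below evaluates it; the pore size, filling fraction and refractive indices are illustrative numbers, not values from the paper.

      import numpy as np

      def stop_band_position(D_nm, f_solid, n_solid, n_void, theta_deg=0.0):
          """Modified Bragg's law estimate of the fcc [111] stop-band wavelength (nm)."""
          d111 = np.sqrt(2.0 / 3.0) * D_nm      # [111] inter-planar spacing for fcc spheres
          n_eff = np.sqrt(f_solid * n_solid**2 + (1.0 - f_solid) * n_void**2)
          return 2.0 * d111 * np.sqrt(n_eff**2 - np.sin(np.radians(theta_deg))**2)

      # Illustrative only: 240 nm macropores, ~20% anatase skeleton, air-filled pores.
      print(round(stop_band_position(240.0, 0.20, 2.5, 1.0), 1), "nm")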

  16. Non-classical continuum theory for solids incorporating internal rotations and rotations of Cosserat theories

    NASA Astrophysics Data System (ADS)

    Surana, K. S.; Joy, A. D.; Reddy, J. N.

    2017-03-01

    This paper presents a non-classical continuum theory in Lagrangian description for solids in which the conservation and the balance laws are derived by incorporating both the internal rotations arising from the Jacobian of deformation and the rotations of Cosserat theories at a material point. In particular, in this non-classical continuum theory, we have (i) the usual displacements ($\boldsymbol{u}$), (ii) three internal rotations (${}_{i}\boldsymbol{\Theta}$) about the axes of a triad whose axes are parallel to the x-frame, arising from the Jacobian of deformation (which are completely defined by the skew-symmetric part of the Jacobian of deformation), and (iii) three additional rotations (${}_{e}\boldsymbol{\Theta}$) about the axes of the same triad located at each material point, treated as three additional degrees of freedom and referred to as Cosserat rotations. This gives rise to $\boldsymbol{u}$ and ${}_{e}\boldsymbol{\Theta}$ as six degrees of freedom at a material point. The internal rotations (${}_{i}\boldsymbol{\Theta}$), often neglected in classical continuum mechanics, exist in all deforming solid continua as these are due to the Jacobian of deformation. When the internal rotations ${}_{i}\boldsymbol{\Theta}$ are resisted by the deforming matter, a conjugate moment tensor arises that, together with ${}_{i}\boldsymbol{\Theta}$, may result in energy storage and/or dissipation, which must be accounted for in the conservation and the balance laws. The Cosserat rotations ${}_{e}\boldsymbol{\Theta}$ also result in a conjugate moment tensor which, together with ${}_{e}\boldsymbol{\Theta}$, may also result in energy storage and/or dissipation. The main focus of the paper is a consistent derivation of conservation and balance laws that incorporate the aforementioned physics and associated constitutive theories for thermoelastic solids. The mathematical model derived here has closure, and the constitutive theories derived using two alternate approaches are in agreement with each other as well as with the condition resulting from the entropy inequality. Material coefficients introduced in the constitutive theories are clearly defined and discussed.

  17. Spatial Variability in Column CO2 Inferred from High Resolution GEOS-5 Global Model Simulations: Implications for Remote Sensing and Inversions

    NASA Technical Reports Server (NTRS)

    Ott, L.; Putman, B.; Collatz, J.; Gregg, W.

    2012-01-01

    Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale of models and observations. OCO-2 footprints represent an area of several square kilometers, while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers, and grid cells often cover areas that include combinations of land, ocean and coastal areas and areas of significant topographic, land cover, and population density variations. To improve understanding of scales of atmospheric CO2 variability and representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase in resolution over typical global simulations of atmospheric composition, allowing new insight into small scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half degree resolution that have been down-scaled to 10-km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse resolution models to represent these small scale features. Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement concepts to create realistic pseudo-datasets. Pseudo-data are averaged over coarse model grid cell areas to better understand the ability of measurements to characterize CO2 distributions and spatial gradients on both short (daily to weekly) and long (monthly to seasonal) time scales.

  18. The exploration technology and application of sea surface wave

    NASA Astrophysics Data System (ADS)

    Wang, Y.

    2016-12-01

    In order to investigate the seismic velocity structure of the shallow sediments in the Bohai Sea of China, we conduct a shear-wave velocity inversion of the surface wave dispersion data from a survey of 12 ocean bottom seismometers (OBS) and 377 shots of a 9000 in³ air gun. With OBS station spacing of 5 km and air gun shot spacing of 190 m, high-quality Rayleigh wave data were recorded by the OBSs within 0.4-5 km offset. Rayleigh wave phase velocity dispersion for the fundamental mode and first overtone in the frequency band of 0.9-3.0 Hz were retrieved with the phase-shift method and inverted for the shear-wave velocity structure of the shallow sediments with a damped iterative least-square algorithm. Pseudo 2-D shear-wave velocity profiles with depth to 400 m show coherent features of relatively weak lateral velocity variation. The uncertainty in shear-wave velocity structure was also estimated based on the pseudo 2-D profiles from 6 trial inversions with different initial models, which suggest a velocity uncertainty < 30 m/s for most parts of the 2-D profiles. The layered structure with little lateral variation may be attributable to the continuous sedimentary environment in the Cenozoic sedimentary basin of the Bohai Bay basin. The shear-wave velocity of 200-300 m/s in the top 100 m of the Bohai Sea floor may provide important information for offshore site response studies in earthquake engineering. Furthermore, the very low shear-wave velocity structure (200-700 m/s) down to 400 m depth could produce a significant travel time delay of 1 s in the S wave arrivals, which needs to be considered to avoid serious bias in S wave traveltime tomographic models.

  19. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
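
    At its core, pseudo-inverse source identification solves a linear model p = G q for the monopole source strengths q using the Moore-Penrose pseudo-inverse of the propagation matrix G. A minimal free-field sketch follows (not the cross-spectral formulation of the paper); the array layout, source grid and frequency are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      k = 2 * np.pi * 1000.0 / 340.0                       # acoustic wavenumber at 1 kHz

      def green(src, mic):
          """Free-field monopole Green's function between source and microphone points."""
          r = np.linalg.norm(mic[:, None, :] - src[None, :, :], axis=2)
          return np.exp(-1j * k * r) / (4 * np.pi * r)

      src = rng.uniform(-0.2, 0.2, size=(10, 3))                    # candidate source grid (m)
      mic = rng.uniform(-1.0, 1.0, size=(30, 3)) + [0.0, 0.0, 2.0]  # phased-array probes (m)

      q_true = np.zeros(10, complex)
      q_true[[2, 7]] = [1.0, 0.5 + 0.5j]                   # two "active" monopoles
      p = green(src, mic) @ q_true                         # simulated array pressures

      q_est = np.linalg.pinv(green(src, mic)) @ p          # pseudo-inverse source strengths
      print(np.round(np.abs(q_est), 3))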

  20. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    Building on an analysis of a cosine light field with a known analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field defined by a deterministic discrete Fourier transform (DFT) measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the DFT measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix, and the reconstruction process and reconstruction error are analyzed. Simulations are then performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the DFT measurement matrix equals that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the images reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and achieve denoised reconstruction with a higher denoising capability than the CGI algorithm. The FGI algorithm thus improves both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
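
    Stripped of the optics, the reconstruction step compared above is linear algebra: bucket measurements y = A x for a deterministic DFT-derived pattern matrix or a random pattern matrix A, with x recovered as pinv(A) y. The toy sketch below illustrates only this core; the 16-pixel object and the particular choice of real Fourier rows are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 16                                    # toy 1-D "object" with 16 pixels
      x = rng.uniform(0.0, 1.0, size=n)         # ground-truth transmittance

      # Deterministic patterns from the real DFT basis (cosine and sine rows), versus
      # the random patterns used in compressive computational ghost imaging.
      k = np.arange(n)[:, None]
      j = np.arange(n)[None, :]
      A_dft = np.vstack([np.cos(2 * np.pi * k[: n // 2 + 1] * j / n),
                         np.sin(2 * np.pi * k[1 : n // 2] * j / n)])   # n x n, full rank
      A_rand = rng.standard_normal((n, n))

      for name, A in (("DFT", A_dft), ("random", A_rand)):
          y = A @ x                             # simulated bucket-detector measurements
          x_hat = np.linalg.pinv(A) @ y         # pseudo-inverse reconstruction
          print(name, np.max(np.abs(x_hat - x)))   # ~1e-15: exact up to round-off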

  1. Inverse-consistent rigid registration of CT and MR for MR-based planning and adaptive prostate radiation therapy

    NASA Astrophysics Data System (ADS)

    Rivest-Hénault, David; Dowson, Nicholas; Greer, Peter; Dowling, Jason

    2014-03-01

    MRI-alone treatment planning and adaptive MRI-based prostate radiation therapy are two promising techniques that could significantly increase the accuracy of the curative dose delivery processes while reducing the total radiation dose. State-of-the-art methods rely on the registration of a patient MRI with an MR-CT atlas for the estimation of pseudo-CT [5]. This atlas itself is generally created by registering many CT and MRI pairs. Most registration methods are not symmetric: the order of the input images influences the result [8]. The computed transformation is therefore biased, introducing unwanted variability. This work examines how much a symmetric algorithm improves the registration. Methods: A robust symmetric registration algorithm is proposed that simultaneously optimises a half space transform and its inverse. During the registration process, the two input volumetric images are transformed to a common position in space, therefore minimising any computational bias. An asymmetrical implementation of the same algorithm was used for comparison purposes. Results: Whole pelvis MRI and CT scans from 15 prostate patients were registered, as in the creation of MR-CT atlases. In each case, two registrations were performed, with different input image orders, and the transformation error quantified. Mean residuals of 0.63±0.26 mm (translation) and (8.7±7.3) × 10⁻³ rad (rotation) were found for the asymmetrical implementation, with corresponding values of 0.038±0.039 mm and (1.6±1.3) × 10⁻³ rad for the proposed symmetric algorithm, a substantial improvement. Conclusions: The increased registration precision will enhance the generation of pseudo-CT from MRI for atlas-based MR planning methods.

  2. Strain-Engineering of Giant Pseudo-Magnetic Fields in Graphene/Boron Nitride (BN) Periodic Nanostructures

    NASA Astrophysics Data System (ADS)

    Hsu, Chen-Chih; Wang, Jiaqing; Teague, Marcus; Chen, Chien-Chang; Yeh, Nai-Chang

    2015-03-01

    Ideal graphene is strain-free whereas non-trivial strain can induce pseudo-magnetic fields as predicted theoretically and manifested experimentally. Here we employ nearly strain-free single-domain graphene, grown by plasma-enhanced chemical vapor deposition (PECVD) at low temperatures, to induce controlled strain by placing the PECVD-graphene on substrates containing engineered nanostructures. We fabricate periodic pyramid nanostructures (typically 100-200 nm laterally and 10-60 nm in height) on Si substrates by focused ion beam, and determine the topography of these nanostructures using atomic force microscopy and scanning electron microscopy after we transferred monolayer h-BN followed by PECVD-graphene onto these substrates. We find both layers conform well to the nanostructures so that we can control the size, arrangement, separation, and shape of the nanostructures to generate desirable pseudo-magnetic fields. We also employ molecular dynamics simulation to determine the displacement of carbon atoms under a given nanostructure. The pseudo-magnetic field thus obtained is ~150 T in the center, relatively homogeneous over 50% of the area, and drops off precipitously near the edge. These findings are extended to arrays of nanostructures and compared with topographic and spectroscopic studies by STM. Supported by NSF.

  3. Modeling large wind farms in conventionally neutral atmospheric boundary layers under varying initial conditions

    NASA Astrophysics Data System (ADS)

    Allaerts, Dries; Meyers, Johan

    2014-05-01

    Atmospheric boundary layers (ABL) are frequently capped by an inversion layer limiting the entrainment rate and boundary layer growth. Commonly used analytical models state that the entrainment rate is inversely proportional to the inversion strength. The height of the inversion turns out to be a second important parameter. Conventionally neutral atmospheric boundary layers (CNBL) are ABLs with zero surface heat flux developing against a stratified free atmosphere. In this regime the inversion-filling process is merely driven by the downward heat flux at the inversion base. As a result, CNBLs are strongly dependent on the heating history of the boundary layer and strong inversions will fail to erode during the course of the day. In the case of large wind farms, the power output of the farm inside a CNBL will depend on the height and strength of the inversion above the boundary layer. On the other hand, increased turbulence levels induced by wind farms may partially undermine the rigid lid effect of the capping inversion, enhance vertical entrainment of air into the farm, and increase boundary layer growth. A suite of large eddy simulations (LES) is performed to investigate the effect of the capping inversion on the conventionally neutral atmospheric boundary layer and on the wind farm performance under varying initial conditions. For these simulations our in-house pseudo-spectral LES code SP-Wind is used. The wind turbines are modelled using a non-rotating actuator disk method. In the absence of wind farms, we find that a decrease in inversion strength corresponds to a decrease in the geostrophic angle and an increase in entrainment rate and geostrophic drag. Placing the initial inversion base at higher altitudes further reduces the effect of the capping inversion on the boundary layer. The inversion can be fully neglected once it is situated above the equilibrium height that a truly neutral boundary layer would attain under the same external conditions, such as geostrophic wind speed and surface roughness. Wind farm simulations show the expected increase in boundary layer height and growth rate with respect to the case without wind farms. Raising the initial strength of the capping inversion in these simulations dampens the turbulent growth of the boundary layer above the farm, decreasing the farm's energy extraction. The authors acknowledge support from the European Research Council (FP7-Ideas, grant no. 306471). Simulations were performed on the computing infrastructure of the VSC Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Government.

  4. RANDOMNESS of Numbers DEFINITION(QUERY:WHAT? V HOW?) ONLY Via MAXWELL-BOLTZMANN CLASSICAL-Statistics(MBCS) Hot-Plasma VS. Digits-Clumping Log-Law NON-Randomness Inversion ONLY BOSE-EINSTEIN QUANTUM-Statistics(BEQS) .

    NASA Astrophysics Data System (ADS)

    Siegel, Z.; Siegel, Edward Carl-Ludwig

    2011-03-01

    RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!

  5. A numerical method to solve the 1D and the 2D reaction diffusion equation based on Bessel functions and Jacobian free Newton-Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Parand, K.; Nikarya, M.

    2017-11-01

    In this paper a novel method will be introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian free Newton-generalized minimum residual (JFNGMRes) method with adaptive preconditioner. In this work a nonlinear PDE has been converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions without any linearization, discretization or getting the help of any other methods. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is achieved. To illustrate the reliability and efficiency of the proposed method, we solve some examples of the famous Fisher equation. We compare our results with other methods.

  6. Generating log-normal mock catalog of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
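
    A stripped-down illustration of the log-normal construction (2-D, toy power spectrum, no velocities or redshift-space mapping, so not the released code) is sketched below: a Gaussian field with a prescribed spectrum is exponentiated to give a density contrast bounded below by -1, which is then Poisson-sampled into galaxy counts.

      import numpy as np

      rng = np.random.default_rng(5)
      n, box = 64, 100.0                          # grid cells per side, box size (arbitrary)
      kf = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
      kx, ky = np.meshgrid(kf, kf, indexing="ij")
      k = np.hypot(kx, ky)

      # Toy power spectrum for the Gaussian field (illustrative, not a cosmological P(k)).
      pk = np.where(k > 0, 1.0 / (1.0 + (k / 0.5) ** 2), 0.0)

      # Gaussian random field with that spectrum, generated in Fourier space.
      g = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * np.sqrt(pk)))

      # Log-normal density contrast: always > -1 and approximately zero-mean.
      delta = np.exp(g - g.var() / 2.0) - 1.0

      # Poisson-sample galaxy counts cell by cell.
      nbar = 5.0                                  # mean galaxies per cell (illustrative)
      counts = rng.poisson(nbar * (1.0 + delta))
      print(delta.min() > -1.0, counts.sum())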

  7. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve the two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested with a few two-phase flow problems with phase appearance/disappearance phenomena considered, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behaviors in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. The high-resolution spatial discretization and second-order fully implicit methods also demonstrated their capability to significantly reduce numerical errors.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with “big data” processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a “Jacobian-free” inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In our paper, we extend PCGA to handle a large number of measurements (e.g. 10⁶ or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data was compressed by the zero-th temporal moment of the breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method. This article is protected by copyright. All rights reserved.

  9. Cyclic coordinate descent: A robotics algorithm for protein loop closure.

    PubMed

    Canutescu, Adrian A; Dunbrack, Roland L

    2003-05-01

    In protein structure prediction, it is often the case that a protein segment must be adjusted to connect two fixed segments. This occurs during loop structure prediction in homology modeling as well as in ab initio structure prediction. Several algorithms for this purpose are based on the inverse Jacobian of the distance constraints with respect to dihedral angle degrees of freedom. These algorithms are sometimes unstable and fail to converge. We present an algorithm developed originally for inverse kinematics applications in robotics. In robotics, an end effector in the form of a robot hand must reach for an object in space by altering adjustable joint angles and arm lengths. In loop prediction, dihedral angles must be adjusted to move the C-terminal residue of a segment to superimpose on a fixed anchor residue in the protein structure. The algorithm, referred to as cyclic coordinate descent or CCD, involves adjusting one dihedral angle at a time to minimize the sum of the squared distances between three backbone atoms of the moving C-terminal anchor and the corresponding atoms in the fixed C-terminal anchor. The result is an equation in one variable for the proposed change in each dihedral. The algorithm proceeds iteratively through all of the adjustable dihedral angles from the N-terminal to the C-terminal end of the loop. CCD is suitable as a component of loop prediction methods that generate large numbers of trial structures. It succeeds in closing loops in a large test set 99.79% of the time, and fails occasionally only for short, highly extended loops. It is very fast, closing loops of length 8 in 0.037 sec on average.
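
    The flavor of CCD is easiest to see in its original robotics form, where each pass adjusts one joint at a time so that the end effector rotates toward the target. The planar sketch below is that generic inverse-kinematics version with made-up link lengths and target, not the dihedral-angle, three-anchor-atom variant developed in the paper.

      import numpy as np

      def ccd_pass(joints, target):
          """One cyclic-coordinate-descent pass over a planar chain.

          joints is an (n+1) x 2 array of joint positions, the last row being the
          end effector; each joint is rotated so that the joint-to-end-effector
          direction aligns with the joint-to-target direction."""
          for i in range(len(joints) - 2, -1, -1):
              to_end = joints[-1] - joints[i]
              to_tgt = target - joints[i]
              ang = np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_end[1], to_end[0])
              c, s = np.cos(ang), np.sin(ang)
              R = np.array([[c, -s], [s, c]])
              # Rigid rotation of the distal part of the chain about joint i.
              joints[i + 1:] = joints[i] + (joints[i + 1:] - joints[i]) @ R.T
          return joints

      chain = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])  # three unit links
      target = np.array([1.2, 1.8])
      for _ in range(20):                        # iterate passes until convergence
          chain = ccd_pass(chain, target)
      print(np.round(chain[-1], 3))              # end effector lands near the target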

  10. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.

  11. Processing complex pseudo-words in mild cognitive impairment: The interaction of preserved morphological rule knowledge with compromised cognitive ability.

    PubMed

    Manouilidou, Christina; Dolenc, Barbara; Marvin, Tatjana; Pirtošek, Zvezdan

    2016-01-01

    Mild cognitive impairment (MCI) affects the cognitive performance of elderly adults. However, the level of severity is not high enough to be diagnosed with dementia. Previous research reports subtle language impairments in individuals with MCI specifically in domains related to lexical meaning. The present study used both off-line (grammaticality judgment) and on-line (lexical decision) tasks to examine aspects of lexical processing and how they are affected by MCI. 21 healthy older adults and 23 individuals with MCI saw complex pseudo-words that violated various principles of word formation in Slovenian and decided if each letter string was an actual word of their language. The pseudo-words ranged in their degree of violability. A task effect was found, with MCI performance to be similar to that of healthy controls in the off-line task but different in the on-line task. Overall, the MCI group responded slower than the elderly controls. No significant differences were observed in the off-line task, while the on-line task revealed a main effect of Violation type, a main effect of Group and a significant Violation × Group interaction reflecting a difficulty for the MCI group to process pseudo-words in real time. That is, while individuals with MCI seem to preserve morphological rule knowledge, they experience additional difficulties while processing complex pseudo-words. This was attributed to an executive dysfunction associated with MCI that delays the recognition of ungrammatical formations.

  12. A trajectory generation and system characterization model for cislunar low-thrust spacecraft. Volume 2: Technical manual

    NASA Technical Reports Server (NTRS)

    Korsmeyer, David J.; Pinon, Elfego, III; Oconnor, Brendan M.; Bilby, Curt R.

    1990-01-01

    The documentation of the Trajectory Generation and System Characterization Model for the Cislunar Low-Thrust Spacecraft is presented in Technical and User's Manuals. The system characteristics and trajectories of low thrust nuclear electric propulsion spacecraft can be generated through the use of multiple system technology models coupled with a high fidelity trajectory generation routine. The Earth to Moon trajectories utilize near Earth orbital plane alignment, midcourse control dependent upon the spacecraft's Jacobian constant, and capture to target orbit utilizing velocity matching algorithms. The trajectory generation is performed in a perturbed two-body equinoctial formulation and the restricted three-body formulation. A single control is determined by the user for the interactive midcourse portion of the trajectory. The full spacecraft system characteristics and trajectory are provided as output.
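
    The "Jacobian constant" used for midcourse control is presumably the Jacobi constant of the circular restricted three-body formulation; a small evaluation routine in normalized rotating-frame coordinates is sketched below. The sign and normalization conventions, and the sample state (a point near the Earth-Moon L1 region), are assumptions for illustration.

      import numpy as np

      def jacobi_constant(state, mu):
          """Jacobi constant in the circular restricted three-body problem.

          state = (x, y, z, vx, vy, vz) in nondimensional rotating-frame coordinates,
          mu = m2 / (m1 + m2)."""
          x, y, z, vx, vy, vz = state
          r1 = np.sqrt((x + mu) ** 2 + y**2 + z**2)        # distance to the larger primary
          r2 = np.sqrt((x - 1 + mu) ** 2 + y**2 + z**2)    # distance to the smaller primary
          omega = 0.5 * (x**2 + y**2) + (1 - mu) / r1 + mu / r2
          return 2.0 * omega - (vx**2 + vy**2 + vz**2)

      mu_earth_moon = 0.01215                              # approximate Earth-Moon mass ratio
      print(round(jacobi_constant((0.8369, 0.0, 0.0, 0.0, 0.0, 0.0), mu_earth_moon), 4))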

  13. Neural Networks for Signal Processing and Control

    NASA Astrophysics Data System (ADS)

    Hesselroth, Ted Daniel

    Neural networks are developed for controlling a robot-arm and camera system and for processing images. The networks are based upon computational schemes that may be found in the brain. In the first network, a neural map algorithm is employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm employed shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a set of pressures corresponding to the end effector positions, as well as a set of Jacobian matrices for interpolating between these positions. Because of the properties of the rubber-tube actuators of the arm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (~3 mm) after two hundred learning steps. Application of repeated corrections in each step via the Jacobian matrices leads to a very robust control algorithm, since the Jacobians learned by the network have to satisfy only the weak requirement that they yield a reduction of the distance between gripper and target. The second network is proposed as a model for the mammalian vision system in which backward connections from the primary visual cortex (V1) to the lateral geniculate nucleus play a key role. The application of Hebbian learning to the forward and backward connections causes the formation of receptive fields which are sensitive to edges, bars, and spatial frequencies of preferred orientations. The receptive fields are learned in such a way as to maximize the rate of transfer of information from the LGN to V1. Orientational preferences are organized into a feature map in the primary visual cortex by the application of lateral interactions during the learning phase. The organization of the mature network is compared to that found in the macaque monkey by several analytical tests. The capacity of the network to process images is investigated. By a method of reconstructing the input images in terms of V1 activities, the simulations show that images can be faithfully represented in V1 by the proposed network. The signal-to-noise ratio of the image is improved by the representation, and compression ratios of well over two hundred are possible. Lateral interactions between V1 neurons sharpen their orientational tuning. We further study the dynamics of the processing, showing that the rate of decrease of the error of the reconstruction is maximized for the receptive fields used. Lastly, we employ a Fokker-Planck equation for a more detailed prediction of the error value vs. time. The Fokker-Planck equation for an underdamped system with a driving force is derived, yielding an energy-dependent diffusion coefficient which is the integral of the spectral densities of the force and the velocity of the system. The theory is applied to correlated noise activation and resonant activation. Simulation results for the error of the network vs. time are compared to the solution of the Fokker-Planck equation.

  14. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    NASA Astrophysics Data System (ADS)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; the final subsurface velocity model is then generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by the P-velocity; in elastic media, however, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. The elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite difference method was applied to simulate the OBS survey. For the inversion, the l2-norm was set as the objective function. The gradient direction was computed accurately using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameterization that gives the most accurate result for inversion with the OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.

  15. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  16. Time-Lapse Acoustic Impedance Inversion in CO2 Sequestration Study (Weyburn Field, Canada)

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Morozov, I. B.

    2016-12-01

    Acoustic-impedance (AI) pseudo-logs are useful for characterising subtle variations of fluid content during seismic monitoring of reservoirs undergoing enhanced oil recovery and/or geologic CO2 sequestration. However, highly accurate AI images are required for time-lapse analysis, which may be difficult to achieve with conventional inversion approaches. In this study, two enhancements of time-lapse AI analysis are proposed. First, a well-known uncertainty of AI inversion is caused by the lack of low-frequency signal in reflection seismic data. To resolve this difficulty, we utilize an integrated AI inversion approach combining seismic data, acoustic well logs and seismic-processing velocities. The use of well logs helps stabilizing the recursive AI inverse, and seismic-processing velocities are used to complement the low-frequency information in seismic records. To derive the low-frequency AI from seismic-processing velocity data, an empirical relation is determined by using the available acoustic logs. This method is simple and does not require subjective choices of parameters and regularization schemes as in the more sophisticated joint inversion methods. The second improvement to accurate time-lapse AI imaging consists in time-variant calibration of reflectivity. Calibration corrections consist of time shifts, amplitude corrections, spectral shaping and phase rotations. Following the calibration, average and differential reflection amplitudes are calculated, from which the average and differential AI are obtained. The approaches are applied to a time-lapse 3-D 3-C dataset from Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High quality time-lapse AI volumes are obtained. Comparisons with traditional recursive and colored AI inversions (obtained without using seismic-processing velocities) show that the new method gives a better representation of spatial AI variations. Although only early stages of monitoring seismic data are available, time-lapse AI variations mapped within and near the reservoir zone suggest correlations with CO2 injection. By extending this procedure to elastic impedances, additional constraints on the variations of physical properties within the reservoir can be obtained.
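
    For readers unfamiliar with recursive AI inversion, the kernel of the method is the layer-by-layer recursion Z[i+1] = Z[i](1 + r[i])/(1 - r[i]) applied to a calibrated reflectivity series, with the low-frequency component supplied separately (in the sketch below, only through the starting value z0). A minimal round-trip check with made-up numbers follows; it is not the calibration or low-frequency workflow of the study.

      import numpy as np

      def impedance_from_reflectivity(z0, r):
          """Recursive acoustic-impedance trace from a reflectivity series."""
          z = np.empty(len(r) + 1)
          z[0] = z0                                  # low-frequency/background starting value
          for i, ri in enumerate(r):
              z[i + 1] = z[i] * (1.0 + ri) / (1.0 - ri)
          return z

      # Round-trip check with a made-up impedance log.
      z_true = np.array([4.0, 4.5, 4.5, 6.0, 5.2])   # e.g. in 10^6 kg m^-2 s^-1
      refl = (z_true[1:] - z_true[:-1]) / (z_true[1:] + z_true[:-1])
      print(np.round(impedance_from_reflectivity(z_true[0], refl), 3))   # recovers z_true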

  17. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Ranero, C. R.

    2012-04-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also offers the possibility of including water-layer multiples in the modeling, whenever this phase can be followed to greater offsets than the primary phases. This increases the quantity of useful information in the data and yields more extensive and better constrained velocity and geometry models. We will present results from benchmark tests for forward and inverse problems, as well as synthetic tests comparing an inversion with refractions only and another one with both refractions and reflections.

  18. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy of parallel mechanisms, but further efforts are required to control errors and improve accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a greater effect on the end-effector accuracy. Based on the sensitivity analysis results, the tolerance design is cast as a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  19. Joint inversion of seismic refraction and resistivity data using layered models - applications to hydrogeology

    NASA Astrophysics Data System (ADS)

    Juhojuntti, N. G.; Kamm, J.

    2010-12-01

    We present a layered-model approach to joint inversion of shallow seismic refraction and resistivity (DC) data, which we believe is a seldom-tested way of addressing the problem. The method was developed because, for shallow sedimentary environments (roughly <100 m depth), we believe a model with a few layers and sharp layer boundaries represents the subsurface better than a smooth minimum-structure (grid) model. Because our model parameterization imposes a strong assumption on the subsurface, only a small number of well-resolved model parameters has to be estimated, and provided that this assumption holds, the method can also be applied to other environments. We use a least-squares inversion with lateral smoothness constraints, allowing lateral variations in the seismic velocity and the resistivity but no vertical variations. One exception is a positive gradient in the seismic velocity in the uppermost layer in order to obtain diving rays (the refractions in the deeper layers are modeled as head waves). We assume no connection between seismic velocity and resistivity, and these parameters are allowed to vary individually within the layers. The layer boundaries are, however, common to both parameters. During the inversion, lateral smoothing can be applied to the layer boundaries as well as to the seismic velocity and the resistivity. The number of layers is specified before the inversion, and typically we use models with three layers. Depending on the type of environment it is possible to apply smoothing either to the depth of the layer boundaries or to the thickness of the layers, although normally the former is used for shallow sedimentary environments. The smoothing parameters can be chosen independently for each layer. For the DC data we use a finite-difference algorithm to perform the forward modeling and to calculate the Jacobian matrix, while for the seismic data the corresponding entities are obtained via ray tracing, using components from the RAYINVR package. The modular layout of the code makes it straightforward to include other types of geophysical data, e.g. gravity. The code has been tested using synthetic examples with fairly simple 2D geometries, mainly for checking the validity of the calculations. The inversion generally converges towards the correct solution, although there can be stability problems if the starting model is too inaccurate. We have also applied the code to field data from seismic refraction and multi-electrode resistivity measurements at typical sand-gravel groundwater reservoirs. The tests are promising, as the calculated depths agree fairly well with information from drilling, and the velocity and resistivity values appear reasonable. Current work includes better regularization of the inversion as well as defining individual weight factors for the different datasets, as the present algorithm tends to constrain the depths mainly using the seismic data. More complex synthetic examples will also be tested, including models addressing the seismic hidden-layer problem.
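
    A minimal sketch of one damped, laterally smoothed least-squares update of the kind described above is given below; the stacking of the joint Jacobian, the difference operator and the weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smoothed_lsq_update(J, r, m, lam=1.0, mu=1.0):
    """One regularized least-squares model update.

    J   : joint Jacobian (seismic + DC rows) w.r.t. the model vector m
    r   : data residual vector (observed minus predicted)
    m   : current model (e.g. layer velocities, resistivities, depths)
    lam : damping weight, mu : lateral-smoothness weight
    """
    n = m.size
    D = np.diff(np.eye(n), axis=0)            # first-difference (smoothing) operator
    A = J.T @ J + lam * np.eye(n) + mu * (D.T @ D)
    b = J.T @ r - mu * (D.T @ (D @ m))        # smooth the updated model, not only the step
    return m + np.linalg.solve(A, b)
```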

  20. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in a survey. To reduce this cost, Romero (2000) proposed a phase-encoding technique for prestack migration, and Krebs et al. (2009) proposed an encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembly. Although several studies on simultaneous-source inversion have estimated P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed over iterations, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is built on the back-propagation technique and the conjugate-gradient method, and the source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated by the simultaneous-source technique. Comparing the inverted results obtained with the pseudo-Hessian matrix to previous inversion results obtained with the approximate Hessian matrix, the latter are better for the deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
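
    The random phase encoding used to assemble a simultaneous source can be sketched as follows for a single frequency; this is a generic illustration, not the authors' elastic FWI code, and the array shapes are assumptions.

```python
import numpy as np

def encode_supershot(shot_gathers, rng):
    """Assemble one encoded simultaneous source (supershot) at one frequency.

    shot_gathers : complex array (n_sources, n_receivers) of monochromatic data
    rng          : numpy random Generator drawing the encoding phases
    """
    n_src = shot_gathers.shape[0]
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_src)
    codes = np.exp(1j * phases)               # random phase encoding terms
    supershot = (codes[:, None] * shot_gathers).sum(axis=0)
    return codes, supershot

# Example: 8 sources, 64 receivers at a single frequency
rng = np.random.default_rng(0)
data = rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))
codes, supershot = encode_supershot(data, rng)
```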

  1. Pseudo-spectral control of a novel oscillating surge wave energy converter in regular waves for power optimization including load reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan M.; Yu, Yi-Hsiang; Wright, Alan D.

    The aim of this study is to describe a procedure to maximize the power-to-load ratio of a novel wave energy converter (WEC) that combines an oscillating surge wave energy converter with variable structural components. The control of the power-take-off torque will be on a wave-to-wave timescale, whereas the structure will be controlled statically such that the geometry remains the same throughout the wave period. Linear hydrodynamic theory is used to calculate the upper and lower bounds for the time-averaged absorbed power and surge foundation loads while assuming that the WEC motion remains sinusoidal. Previous work using pseudo-spectral techniques to solve the optimal control problem focused solely on maximizing absorbed energy. This work extends the optimal control problem to include a measure of the surge foundation force in the optimization. The objective function includes two competing terms that force the optimizer to maximize power capture while minimizing structural loads. A penalty weight was included with the surge foundation force that allows control of the optimizer performance based on whether emphasis should be placed on power absorption or load shedding. Results from pseudo-spectral optimal control indicate that a unit reduction in time-averaged power can be accompanied by a greater reduction in surge-foundation force.

  2. Pseudo-spectral control of a novel oscillating surge wave energy converter in regular waves for power optimization including load reduction

    DOE PAGES

    Tom, Nathan M.; Yu, Yi-Hsiang; Wright, Alan D.; ...

    2017-04-18

    The aim of this study is to describe a procedure to maximize the power-to-load ratio of a novel wave energy converter (WEC) that combines an oscillating surge wave energy converter with variable structural components. The control of the power-take-off torque will be on a wave-to-wave timescale, whereas the structure will be controlled statically such that the geometry remains the same throughout the wave period. Linear hydrodynamic theory is used to calculate the upper and lower bounds for the time-averaged absorbed power and surge foundation loads while assuming that the WEC motion remains sinusoidal. Previous work using pseudo-spectral techniques to solve the optimal control problem focused solely on maximizing absorbed energy. This work extends the optimal control problem to include a measure of the surge foundation force in the optimization. The objective function includes two competing terms that force the optimizer to maximize power capture while minimizing structural loads. A penalty weight was included with the surge foundation force that allows control of the optimizer performance based on whether emphasis should be placed on power absorption or load shedding. Results from pseudo-spectral optimal control indicate that a unit reduction in time-averaged power can be accompanied by a greater reduction in surge-foundation force.

  3. A nonlinear H-infinity approach to optimal control of the depth of anaesthesia

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Rigatou, Efthymia; Zervos, Nikolaos

    2016-12-01

    Controlling the level of anaesthesia is important for improving the success rate of surgeries and for reducing the risks to which operated patients are exposed. This paper proposes a nonlinear H-infinity approach to optimal control of the level of anaesthesia. The dynamic model of the anaesthesia, which describes the concentration of the anaesthetic drug in different parts of the body, is subjected to linearization at local operating points. These are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and of the last control input that was exerted on it. For this linearization Taylor series expansion is performed and the system's Jacobian matrices are computed. For the linearized model an H-infinity controller is designed. The feedback control gains are found by solving at each iteration of the control algorithm an algebraic Riccati equation. The modelling errors due to this approximate linearization are considered as disturbances which are compensated by the robustness of the control loop. The stability of the control loop is confirmed through Lyapunov analysis.
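
    As a hedged illustration of the per-iteration gain computation, the sketch below solves a continuous-time algebraic Riccati equation for a small hypothetical linearized model with SciPy. It uses the standard LQR-type Riccati equation rather than the full H-infinity (game-type) Riccati with a disturbance-attenuation term, and all matrices are placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical Jacobian linearization x' = A x + B u at the current operating point
A = np.array([[-0.5,  0.2],
              [ 0.1, -0.8]])
B = np.array([[1.0],
              [0.0]])
Q = np.eye(2)             # state weighting
R = np.array([[1.0]])     # control weighting

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 for P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P   # state-feedback gain, u = -K x
print(K)
```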

  4. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    PubMed

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in a wireless body sensor network (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm consisting of two phases is proposed. In the first phase, an orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to recover the missing indices. The reconstruction performance is thus enhanced by the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
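
    The flavor of replacing the explicit pseudo-inverse with a QR-based solve can be sketched in a plain OMP loop as below; this is a generic software illustration, not the two-phase vOMMP algorithm or the fixed-point hardware implementation.

```python
import numpy as np

def omp_qr(Phi, y, k):
    """Orthogonal matching pursuit with a matrix-inversion-free LS step via QR."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Select the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least squares on the support via QR instead of an explicit pseudo-inverse
        Q, R = np.linalg.qr(Phi[:, support])
        coef = np.linalg.solve(R, Q.T @ y)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

# Example: recover a 3-sparse vector from 32 random measurements
rng = np.random.default_rng(1)
Phi = rng.standard_normal((32, 128))
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]
x_hat = omp_qr(Phi, Phi @ x_true, k=3)
```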

  5. Glyburide is associated with attenuated vasogenic edema in stroke patients.

    PubMed

    Kimberly, W Taylor; Battey, Thomas W K; Pham, Ly; Wu, Ona; Yoo, Albert J; Furie, Karen L; Singhal, Aneesh B; Elm, Jordan J; Stern, Barney J; Sheth, Kevin N

    2014-04-01

    Brain edema is a serious complication of ischemic stroke that can lead to secondary neurological deterioration and death. Glyburide is reported to prevent brain swelling in preclinical rodent models of ischemic stroke through inhibition of a non-selective channel composed of sulfonylurea receptor 1 and transient receptor potential cation channel subfamily M member 4. However, the relevance of this pathway to the development of cerebral edema in stroke patients is not known. Using a case-control design, we retrospectively assessed neuroimaging and blood markers of cytotoxic and vasogenic edema in subjects who were enrolled in the glyburide advantage in malignant edema and stroke-pilot (GAMES-Pilot) trial. We compared serial brain magnetic resonance images (MRIs) to a cohort with similar large-volume infarctions. We also compared matrix metalloproteinase-9 (MMP-9) plasma levels in large hemispheric stroke. We report that IV glyburide was associated with a lower T2 fluid-attenuated inversion recovery signal intensity ratio on brain MRI, diminished the lesional water diffusivity between days 1 and 2 (pseudo-normalization), and reduced the blood MMP-9 level. Several surrogate markers of vasogenic edema appear to be reduced in the setting of IV glyburide treatment in human stroke. Verification of these potential imaging and blood biomarkers is warranted in the context of a randomized, placebo-controlled trial.

  6. Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Deschamps, Pierre-Yves

    1997-01-01

    Firstly, we have analyzed atmospheric transmittance and sky radiance data collected at the Scripps Institution of Oceanography pier, La Jolla, during the winters of 1993 and 1994. Aerosol optical thickness at 870 nm was generally low in La Jolla, with most values below 0.1 after correction for stratospheric aerosols. For such low optical thickness, variability in aerosol scattering properties cannot be determined, and a mean background model, specified regionally with a stable stratospheric component, may be sufficient for ocean color remote sensing from space. For optical thicknesses above 0.1, two modes of variability, characterized by Angstrom exponents of 1.2 and 0.5 and corresponding to Tropospheric and Maritime models, respectively, were identified in the measurements. The aerosol models selected for ocean color remote sensing allowed one to fit, within measurement inaccuracies, the derived values of the Angstrom exponent and the 'pseudo' phase function (the product of single scattering albedo and phase function), which are key atmospheric correction parameters. Importantly, the 'pseudo' phase function can be derived from measurements of the Angstrom exponent. Shipborne sun photometer measurements at the time of satellite overpass are usually sufficient to verify atmospheric correction for ocean color.
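
    The Angstrom exponent referred to above follows from the usual power-law assumption for aerosol optical thickness; a minimal sketch of that relation is given below with hypothetical example values.

```python
import numpy as np

def angstrom_exponent(tau1, tau2, lam1, lam2):
    """Angstrom exponent alpha from AOT at two wavelengths, assuming tau ~ lambda**(-alpha)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

# Example: AOT of 0.10 at 440 nm and 0.06 at 870 nm gives alpha of roughly 0.75
print(angstrom_exponent(0.10, 0.06, 440.0, 870.0))
```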

  7. Balancing Power Absorption and Structural Loading for a Novel Fixed-Bottom Wave Energy Converter with Nonideal Power Take-Off in Regular Waves: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D

    In this work, the net power delivered to the grid from a nonideal power take-off (PTO) is introduced, followed by a review of the pseudo-spectral control theory. A power-to-load ratio, used to evaluate the pseudo-spectral controller performance, is discussed, and the results obtained from optimizing a multiterm objective function are compared against results obtained from maximizing the net output power to the grid. Simulation results are then presented for four different oscillating wave energy converter geometries to highlight the potential of combining both geometry and PTO control to maximize power while minimizing loads.

  8. Prediction of jump phenomena in roll-coupled maneuvers of airplanes

    NASA Technical Reports Server (NTRS)

    Schy, A. A.; Hannah, M. E.

    1976-01-01

    An easily computerized analytical method is developed for identifying critical airplane maneuvers in which nonlinear rotational coupling effects may cause sudden jumps in the response to pilot's control inputs. Fifth and ninth degree polynomials for predicting multiple pseudo-steady states of roll-coupled maneuvers are derived. The program calculates the pseudo-steady solutions and their stability. The occurrence of jump-like responses for several airplanes and a variety of maneuvers is shown to correlate well with the appearance of multiple stable solutions for critical control combinations. The analysis is extended to include aerodynamics nonlinear in angle of attack.
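
    A minimal sketch of how multiple pseudo-steady states can be located as real roots of such a polynomial is given below; the coefficients are placeholders, not the paper's aerodynamic expressions.

```python
import numpy as np

# Hypothetical fifth-degree polynomial in the steady roll rate p
coeffs = [1.0, 0.0, -2.5, 0.3, 1.1, -0.2]

roots = np.roots(coeffs)
# The real roots are the candidate pseudo-steady states; each would then be
# classified as stable or unstable from the linearized rotational dynamics.
pseudo_steady = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
print(pseudo_steady)
```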

  9. Derivative matrices of a skew ray for spherical boundary surfaces and their applications in system analysis and design.

    PubMed

    Lin, Psang Dain

    2014-05-10

    In a previous paper [Appl. Opt. 52, 4151 (2013)], we presented the first- and second-order derivatives of a ray for a flat boundary surface to design prisms. In this paper, that scheme is extended to determine the Jacobian and Hessian matrices of a skew ray as it is reflected/refracted at a spherical boundary surface. The validity of the proposed approach as an analysis and design tool is demonstrated using an axis-symmetrical system for illustration purposes. It is found that these two matrices can provide the search direction used by existing gradient-based schemes to minimize the merit function during the optimization stage of the optical system design process. It is also possible to make optical system designs more automatic if the image defects can be extracted from the Jacobian and Hessian matrices of a skew ray.

  10. Chemistry-split techniques for viscous reactive blunt body flow computations

    NASA Technical Reports Server (NTRS)

    Li, C. P.

    1987-01-01

    The weak-coupling structure between the fluid and species equations has been exploited, resulting in three closely related time-iterative implicit techniques. The primitive variables are solved in two separate groups, each by an Alternating Direction Implicit (ADI) factorization scheme, while the rate-species Jacobian can be treated in either full or diagonal matrix form, or simply ignored. The latter two versions reduce the split technique to solving for the species as scalar rather than vector variables. The solution is completed at the end of each iteration by determining temperature and pressure from the flow density, energy and species concentrations. Numerical experimentation has shown that the split scalar technique, using a partial rate Jacobian, yields the best overall stability and consistency. Satisfactory viscous solutions were obtained for an ellipsoidal body of axis ratio 3:1 at Mach 35 and an angle of attack of 20 degrees.

  11. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGES

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...

    2016-04-01

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
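
    A minimal, generic Jacobian-free Newton-Krylov step of the kind compared above can be sketched as follows; the finite-difference Jacobian-vector product and the toy system are illustrative assumptions, not the reactor-physics coupling itself.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, x, eps=1e-7):
    """One Jacobian-free Newton-Krylov step for F(x) = 0."""
    Fx = F(x)
    n = x.size

    def jv(v):
        # Finite-difference approximation of the Jacobian-vector product J(x) v
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros(n)
        h = eps * max(1.0, np.linalg.norm(x)) / nv
        return (F(x + h * v) - Fx) / h

    J = LinearOperator((n, n), matvec=jv)
    dx, _ = gmres(J, -Fx)          # Krylov solve of the Newton system
    return x + dx

# Example: a small nonlinear system with solution (1, 2)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
x = np.array([0.5, 1.5])
for _ in range(10):
    x = jfnk_step(F, x)
print(x)
```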

  12. Symmetric and Asymmetric Tendencies in Stable Complex Systems

    PubMed Central

    Tan, James P. L.

    2016-01-01

    A commonly used approach to study stability in a complex system is by analyzing the Jacobian matrix at an equilibrium point of a dynamical system. The equilibrium point is stable if all eigenvalues have negative real parts. Here, by obtaining eigenvalue bounds of the Jacobian, we show that stable complex systems will favor mutualistic and competitive relationships that are asymmetrical (non-reciprocative) and trophic relationships that are symmetrical (reciprocative). Additionally, we define a measure called the interdependence diversity that quantifies how distributed the dependencies are between the dynamical variables in the system. We find that increasing interdependence diversity has a destabilizing effect on the equilibrium point, and the effect is greater for trophic relationships than for mutualistic and competitive relationships. These predictions are consistent with empirical observations in ecology. More importantly, our findings suggest stabilization algorithms that can apply very generally to a variety of complex systems. PMID:27545722
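
    The eigenvalue criterion used above can be written in a few lines; the community matrix below is a hypothetical example, not taken from the paper.

```python
import numpy as np

def is_stable_equilibrium(J):
    """Stable if every eigenvalue of the Jacobian at the equilibrium has negative real part."""
    return bool(np.all(np.linalg.eigvals(J).real < 0.0))

# Hypothetical 3-variable Jacobian with mixed interaction signs
J = np.array([[-1.0,  0.3,  0.0],
              [-0.2, -0.8,  0.1],
              [ 0.0, -0.4, -0.5]])
print(is_stable_equilibrium(J))   # True for this example
```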

  13. Symmetric and Asymmetric Tendencies in Stable Complex Systems.

    PubMed

    Tan, James P L

    2016-08-22

    A commonly used approach to study stability in a complex system is by analyzing the Jacobian matrix at an equilibrium point of a dynamical system. The equilibrium point is stable if all eigenvalues have negative real parts. Here, by obtaining eigenvalue bounds of the Jacobian, we show that stable complex systems will favor mutualistic and competitive relationships that are asymmetrical (non-reciprocative) and trophic relationships that are symmetrical (reciprocative). Additionally, we define a measure called the interdependence diversity that quantifies how distributed the dependencies are between the dynamical variables in the system. We find that increasing interdependence diversity has a destabilizing effect on the equilibrium point, and the effect is greater for trophic relationships than for mutualistic and competitive relationships. These predictions are consistent with empirical observations in ecology. More importantly, our findings suggest stabilization algorithms that can apply very generally to a variety of complex systems.

  14. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  15. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven, E-mail: hamiltonsp@ornl.gov; Berrill, Mark, E-mail: berrillma@ornl.gov; Clarno, Kevin, E-mail: clarnokt@ornl.gov

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  16. Correction of electrode modelling errors in multi-frequency EIT imaging.

    PubMed

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.

  17. Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.

    PubMed

    Liu, Haofei; Sun, Wei

    2017-08-01

    Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as elasticity tensor or material Jacobian) associated with the objective stress rates is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining its computational efficiency. Moreover, since the approximation method is independent of material models, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in Abaqus.

  18. Tall sections from non-minimal transformations

    DOE PAGES

    Morrison, David R.; Park, Daniel S.

    2016-10-10

    In previous work, we showed that elliptic fibrations with two sections, or Mordell-Weil rank one, can always be mapped birationally to a Weierstrass model of a certain form, namely, the Jacobian of a P 112 model. Most constructions of elliptically fibered Calabi-Yau manifolds with two sections have been carried out assuming that the image of this birational map was a “minimal” Weierstrass model. In this paper, we show that for some elliptically fibered Calabi-Yau manifolds with Mordell-Weil rank one, the Jacobian of the P 112 model is not minimal. Said another way, starting from a Calabi-Yau Weierstrass model, the total space must be blown up (thereby destroying the “Calabi-Yau” property) in order to embed the model into P 112. In particular, we show that the elliptic fibrations studied recently by Klevers and Taylor fall into this class of models.

  19. Efficient numerical calculation of MHD equilibria with magnetic islands, with particular application to saturated neoclassical tearing modes

    NASA Astrophysics Data System (ADS)

    Raburn, Daniel Louis

    We have developed a preconditioned, globalized Jacobian-free Newton-Krylov (JFNK) solver for calculating equilibria with magnetic islands. The solver has been developed in conjunction with the Princeton Iterative Equilibrium Solver (PIES) and includes two notable enhancements over a traditional JFNK scheme: (1) globalization of the algorithm by a sophisticated backtracking scheme, which optimizes between the Newton and steepest-descent directions; and, (2) adaptive preconditioning, wherein information regarding the system Jacobian is reused between Newton iterations to form a preconditioner for our GMRES-like linear solver. We have developed a formulation for calculating saturated neoclassical tearing modes (NTMs) which accounts for the incomplete loss of a bootstrap current due to gradients of multiple physical quantities. We have applied the coupled PIES-JFNK solver to calculate saturated island widths on several shots from the Tokamak Fusion Test Reactor (TFTR) and have found reasonable agreement with experimental measurement.

  20. Miniature Rotorcraft Flight Control Stabilization System

    DTIC Science & Technology

    2008-05-30

    The first algorithm is based on the well-known QUEST algorithm used for spacecraft and satellites. Due to large vibration in the sensors, a pseudo-measurement is developed from gyroscope measurements and rotational... using any valid set of orientation map. Note, in Eq. (6) Euler angles were used to describe the orientation. A common alternative to Euler angles is a quaternion.

  1. Efficient high-dimensional characterization of conductivity in a sand box using massive MRI-imaged concentration data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Yoon, H.; Kitanidis, P. K.; Werth, C. J.; Valocchi, A. J.

    2015-12-01

    Characterizing subsurface properties, particularly hydraulic conductivity, is crucial for reliable and cost-effective groundwater supply management, contaminant remediation, and emerging deep subsurface activities such as geologic carbon storage and unconventional resource recovery. With recent advances in sensor technology, a large volume of hydro-geophysical and chemical data can be obtained to achieve high-resolution images of subsurface properties, which can be used for accurate subsurface flow and reactive transport predictions. However, subsurface characterization with a plethora of information requires high, often prohibitive, computational costs associated with "big data" processing and large-scale numerical simulations. As a result, traditional inversion techniques are not well suited for problems that require coupled multi-physics simulation models with massive data. In this work, we apply a scalable inversion method called the Principal Component Geostatistical Approach (PCGA) for characterizing the heterogeneous hydraulic conductivity (K) distribution in a 3-D sand box. The PCGA is a Jacobian-free geostatistical inversion approach that uses the leading principal components of the prior information to reduce computational costs, sometimes dramatically, and can be easily linked with any simulation software. Sequential images of transient tracer concentrations in the sand box were obtained using the magnetic resonance imaging (MRI) technique, resulting in 6 million tracer-concentration data points [Yoon et al., 2008]. Since each individual tracer observation carries little information on the K distribution, the dimension of the data was reduced using temporal moments and the discrete cosine transform (DCT). Consequently, 100,000 unknown K values consistent with the scale of the MRI data (at a scale of 0.25^3 cm^3) were estimated by matching temporal moments and DCT coefficients of the original tracer data. The estimated K field is close to the true K field, and even the small-scale variability of the sand box was captured, highlighting high-K connectivity and contrasts between low- and high-K zones. A total of 1,000 MODFLOW and MT3DMS simulations was required to obtain the final estimates and the corresponding estimation uncertainty, demonstrating the efficiency and effectiveness of our method.
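
    The data-reduction step described above (temporal moments plus leading DCT coefficients per breakthrough curve) can be sketched as follows; the feature layout and sizes are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy.fft import dct

def reduce_breakthrough_curve(conc, n_keep=20):
    """Compress one tracer breakthrough curve into a small feature vector."""
    t = np.arange(conc.size, dtype=float)
    mass = conc.sum()
    m1 = (t * conc).sum() / mass                  # mean arrival time (first moment)
    m2 = ((t - m1) ** 2 * conc).sum() / mass      # spread (second central moment)
    coeffs = dct(conc, norm='ortho')[:n_keep]     # leading DCT coefficients
    return np.concatenate(([mass, m1, m2], coeffs))

# Example: a synthetic breakthrough curve at one voxel
t = np.linspace(0.0, 10.0, 200)
curve = np.exp(-((t - 4.0) ** 2) / 1.5)
features = reduce_breakthrough_curve(curve)
```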

  2. Fine-scale thermohaline ocean structure retrieved with 2-D prestack full-waveform inversion of multichannel seismic data: Application to the Gulf of Cadiz (SW Iberia)

    NASA Astrophysics Data System (ADS)

    Dagnino, D.; Sallarès, V.; Biescas, B.; Ranero, C. R.

    2016-08-01

    This work demonstrates the feasibility of 2-D time-domain, adjoint-state acoustic full-waveform inversion (FWI) to retrieve high-resolution models of ocean physical parameters such as sound speed, temperature and salinity. The proposed method is first described and then applied to prestack multichannel seismic (MCS) data acquired in the Gulf of Cadiz (SW Iberia) in 2007 in the framework of the Geophysical Oceanography project. The inversion workflow includes specifically designed data preconditioning for acoustic noise reduction, followed by the inversion of sound speed in the shot-gather domain. We show that the final sound speed model has a horizontal resolution of ~70 m, which is two orders of magnitude better than that of the initial model constructed with coincident eXpendable BathyThermograph (XBT) data, and close to the theoretical resolution of O(λ). Temperature (T) and salinity (S) are retrieved with the same lateral resolution as sound speed by combining the inverted sound speed model with the thermodynamic equation of seawater and a local, depth-dependent T-S relation derived from regional conductivity-temperature-depth (CTD) measurements from the National Oceanic and Atmospheric Administration (NOAA) database. The comparison of the inverted T and S models with XBT and CTD casts deployed simultaneously with the MCS acquisition shows that the thermohaline contrasts are resolved with an accuracy of 0.18 °C for temperature and 0.08 PSU for salinity. The combination of oceanographic and MCS data into a common, pseudo-automatic inversion scheme makes it possible to quantitatively resolve submeso-scale features that ought to be incorporated into larger-scale models of ocean structure and circulation.

  3. Radiance and Jacobian Intercomparison of Radiative Transfer Models Applied to HIRS and AMSU Channels

    NASA Technical Reports Server (NTRS)

    Garand, L.; Turner, D. S.; Larocque, M.; Bates, J.; Boukabara, S.; Brunel, P.; Chevallier, F.; Deblonde, G.; Engelen, R.; Hollingshead, M.

    2000-01-01

    The goals of this study are the evaluation of current fast radiative transfer models (RTMs) and line-by-line (LBL) models. The intercomparison focuses on the modeling of 11 representative sounding channels routinely used at numerical weather prediction centers: 7 HIRS (High-resolution Infrared Sounder) and 4 AMSU (Advanced Microwave Sounding Unit) channels. Interest in this topic was evidenced by the participation of 24 scientists from 16 institutions. An ensemble of 42 diverse atmospheres was used and results compiled for 19 infrared models and 10 microwave models, including several LBL RTMs. For the first time, not only radiances, but also Jacobians (of temperature, water vapor and ozone) were compared to various LBL models for many channels. In the infrared, LBL models typically agree to within 0.05-0.15 K (standard deviation) in terms of top-of-the-atmosphere brightness temperature (BT). Individual differences up to 0.5 K still exist, systematic in some channels, and linked to the type of atmosphere in others. The best fast models emulate LBL BTs to within 0.25 K, but no model achieves this desirable level of success for all channels. The ozone modeling is particularly challenging. In the microwave, fast models generally do quite well against the LBL model to which they were tuned. However, significant differences were noted among LBL models. Extending the intercomparison to the Jacobians proved very useful in detecting subtle and more obvious modeling errors. In addition, total and single gas optical depths were calculated, which provided additional insight on the nature of differences. Recommendations for future intercomparisons are suggested.

  4. Radiance and Jacobian Intercomparison of Radiative Transfer Models Applied to HIRS and AMSU Channels

    NASA Technical Reports Server (NTRS)

    Garand, L.; Turner, D. S.; Larocque, M.; Bates, J.; Boukabara, S.; Brunel, P.; Chevallier, F.; Deblonde, G.; Engelen, R.; Atlas, Robert (Technical Monitor)

    2000-01-01

    The goals of this study are the evaluation of current fast radiative transfer models (RTMs) and line-by-line (LBL) models. The intercomparison focuses on the modeling of 11 representative sounding channels routinely used at numerical weather prediction centers: seven HIRS (High-resolution Infrared Sounder) and four AMSU (Advanced Microwave Sounding Unit) channels. Interest in this topic was evidenced by the participation of 24 scientists from 16 institutions. An ensemble of 42 diverse atmospheres was used and results compiled for 19 infrared models and 10 microwave models, including several LBL RTMs. For the first time, not only radiances, but also Jacobians (of temperature, water vapor, and ozone) were compared to various LBL models for many channels. In the infrared, LBL models typically agree to within 0.05-0.15 K (standard deviation) in terms of top-of-the-atmosphere brightness temperature (BT). Individual differences up to 0.5 K still exist, systematic in some channels, and linked to the type of atmosphere in others. The best fast models emulate LBL BTs to within 0.25 K, but no model achieves this desirable level of success for all channels. The ozone modeling is particularly challenging. In the microwave, fast models generally do quite well against the LBL model to which they were tuned. However significant differences were noted among LBL models. Extending the intercomparison to the Jacobians proved very useful in detecting subtle and more obvious modeling errors. In addition, total and single gas optical depths were calculated, which provided additional insight on the nature of differences. Recommendations for future intercomparisons are suggested.

  5. Lung deformations and radiation-induced regional lung collapse in patients treated with stereotactic body radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diot, Quentin, E-mail: quentin.diot@ucdenver.edu; Kavanagh, Brian; Vinogradskiy, Yevgeniy

    2015-11-15

    Purpose: To differentiate radiation-induced fibrosis from regional lung collapse outside of the high dose region in patients treated with stereotactic body radiation therapy (SBRT) for lung tumors. Methods: Lung deformation maps were computed from pre-treatment and post-treatment computed tomography (CT) scans using a point-to-point translation method. Fifty anatomical landmarks inside the lung (vessel or airway branches) were matched on planning and follow-up scans for the computation process. Two methods using the deformation maps were developed to differentiate regional lung collapse from fibrosis: the vector field and Jacobian methods. A total of 40 planning and follow-up CT scans were analyzed for 20 lung SBRT patients. Results: Regional lung collapse was detected in 15 patients (75%) using the vector field method, in ten patients (50%) using the Jacobian method, and in 12 patients (60%) by radiologists. In terms of sensitivity and specificity the Jacobian method performed better. Only weak correlations were observed between the dose to the proximal airways and the occurrence of regional lung collapse. Conclusions: The authors presented and evaluated two novel methods using anatomical lung deformations to investigate lung collapse and fibrosis caused by SBRT treatment. Differentiation of these distinct physiological mechanisms beyond what is usually labeled “fibrosis” is necessary for accurate modeling of lung SBRT-induced injuries. With the help of better models, it becomes possible to expand the therapeutic benefits of SBRT to a larger population of lung patients with large or centrally located tumors that were previously considered ineligible.
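
    The Jacobian method referred to above rests on the voxel-wise Jacobian determinant of the deformation; a minimal sketch, assuming a displacement field sampled on a regular grid, is given below.

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of a 3-D displacement field.

    disp : array (3, nx, ny, nz) with the x, y, z displacement components.
    Values below 1 indicate local volume loss (e.g. collapse), above 1 expansion.
    """
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        grads = np.gradient(disp[i], *spacing)
        for j in range(3):
            # Deformation gradient = identity + displacement gradient
            J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)
```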

  6. Generating log-normal mock catalog of galaxies in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
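
    The Poisson sampling of the log-normal field can be sketched as below; this is a minimal illustration of that single step (the velocity assignment from the linearized continuity equation is omitted), and the function names are hypothetical.

```python
import numpy as np

def gaussian_to_lognormal(delta_G):
    """Map a zero-mean Gaussian field to a zero-mean log-normal overdensity."""
    return np.exp(delta_G - 0.5 * delta_G.var()) - 1.0

def sample_galaxy_counts(delta_g, nbar, cell_volume, rng):
    """Poisson-sample galaxy counts per cell with mean nbar * V * (1 + delta_g)."""
    lam = nbar * cell_volume * (1.0 + delta_g)
    return rng.poisson(np.clip(lam, 0.0, None))

# Example on a small grid
rng = np.random.default_rng(3)
delta_G = rng.normal(0.0, 0.5, size=(32, 32, 32))
counts = sample_galaxy_counts(gaussian_to_lognormal(delta_G), nbar=1e-3,
                              cell_volume=1e3, rng=rng)
```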

  7. A comparative ultrastructural study of pit membranes with plasmodesmata associated thickenings in four angiosperm species.

    PubMed

    Rabaey, David; Lens, Frederic; Huysmans, Suzy; Smets, Erik; Jansen, Steven

    2008-11-01

    Recent micromorphological observations of angiosperm pit membranes have extended the number and range of taxa with pseudo-tori in tracheary elements. This study investigates at ultrastructural level (TEM) the development of pseudo-tori in the unrelated Malus yunnanensis, Ligustrum vulgare, Pittosporum tenuifolium, and Vaccinium myrtillus in order to determine whether these plasmodesmata associated thickenings have a similar developmental pattern across flowering plants. At early ontogenetic stages, the formation of a primary thickening was observed, resulting from swelling of the pit membrane in fibre-tracheids and vessel elements. Since plasmodesmata appear to be frequently, but not always, associated with these primary pit membrane thickenings, it remains unclear which ultrastructural characteristics control the formation of pseudo-tori. At a very late stage during xylem differentiation, a secondary thickening is deposited on the primary pit membrane thickening. Plasmodesmata are always associated with pseudo-tori at these final developmental stages. After autolysis, the secondary thickening becomes electron-dense and persistent, while the primary thickening turns transparent and partially or entirely dissolves. The developmental patterns observed in the species studied are similar and agree with former ontogenetic studies in Rosaceae, suggesting that pseudo-tori might be homologous features across angiosperms.

  8. Tsallis entropy and decoherence of CsI quantum pseudo dot qubit

    NASA Astrophysics Data System (ADS)

    Tiotsop, M.; Fotue, A. J.; Fotsin, H. B.; Fai, L. C.

    2017-05-01

    A polaron in a CsI quantum pseudo dot under an electromagnetic field was considered, and the ground- and first-excited-state energies were derived by combining the Pekar variational and unitary transformation methods. With the two-level system obtained, a single qubit was envisioned and its decoherence was studied using the non-extensive (Tsallis) entropy. Numerical results showed: (i) the energy levels increase (and the period of oscillation decreases) with increasing chemical potential, zero point of the pseudo dot, cyclotron frequency, and transverse and longitudinal confinements; (ii) the Tsallis entropy evolves as a wave envelope that grows with the non-extensive parameter, and with increasing electric field strength, zero point of the pseudo dot and cyclotron frequency the envelope evolves periodically with a shrinking period; (iii) the transition probability increases from the boundary to the centre of the dot, where it reaches its maximum value. It was also noted that the probability density oscillates with period T0 = ℏ/ΔE as the chemical potential and the zero point of the pseudo dot are tuned. These results are helpful for the control of decoherence in quantum systems and may also be useful for the design of quantum computers.

  9. An automated, quantitative, and case-specific evaluation of deformable image registration in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kierkels, R. G. J.; den Otter, L. A.; Korevaar, E. W.; Langendijk, J. A.; van der Schaaf, A.; Knopf, A. C.; Sijtsema, N. M.

    2018-02-01

    A prerequisite for adaptive dose-tracking in radiotherapy is the assessment of the deformable image registration (DIR) quality. In this work, various metrics that quantify DIR uncertainties are investigated using realistic deformation fields of 26 head and neck and 12 lung cancer patients. Metrics related to the physiological feasibility of the deformation (the Jacobian determinant, harmonic energy (HE), and octahedral shear strain (OSS)) and to its numerical robustness (the inverse consistency error (ICE), transitivity error (TE), and distance discordance metric (DDM)) were investigated. The deformable registrations were performed using a B-spline transformation model. The DIR error metrics were log-transformed and correlated (Pearson) against the log-transformed ground-truth error on a voxel level. Correlations of r ⩾ 0.5 were found for the DDM and HE. Given a DIR tolerance threshold of 2.0 mm and a negative predictive value of 0.90, the DDM and HE thresholds were 0.49 mm and 0.014, respectively. In conclusion, the log-transformed DDM and HE can be used to identify voxels at risk for large DIR errors with a large negative predictive value. The HE and/or DDM can therefore be used to perform automated quality assurance of each CT-based DIR for head and neck and lung cancer patients.

  10. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this yields a system of nonlinear equations that is difficult to solve analytically, so an approximate (numerical) solution is needed. There are two popular numerical methods: Newton's method and the Quasi-Newton (QN) method. Newton's method requires substantial computation time because it involves the Jacobian matrix (derivatives). The QN method overcomes this drawback by replacing the derivative computation with direct function-based updates. The QN method uses a Hessian-matrix approach based on the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that, like the DFP formula, maintains a positive definite Hessian approximation. The BFGS method requires a large amount of memory, so an algorithm with lower memory usage is needed, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We found that the BFGS and L-BFGS methods require on the order of O(n^2) and O(nm) arithmetic operations, respectively.
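
    For orientation, the sketch below minimizes a hypothetical (binary logistic) negative log-likelihood with SciPy's limited-memory BFGS; the actual GWOLR likelihood, with its geographic weights and ordinal thresholds, is more involved.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, y):
    """Negative log-likelihood of a simple logistic model (illustrative stand-in)."""
    z = X @ beta
    return np.sum(np.log1p(np.exp(z)) - y * z)

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

# L-BFGS-B keeps only a limited history of gradient/step pairs to approximate
# the inverse Hessian, avoiding storage of the full n x n matrix.
res = minimize(neg_log_likelihood, np.zeros(3), args=(X, y), method="L-BFGS-B")
print(res.x)
```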

  11. Regularized solution of a nonlinear problem in electromagnetic sounding

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Fenu, Caterina; Rodriguez, Giuseppe

    2014-12-01

    Non-destructive investigation of soil properties is crucial when trying to identify inhomogeneities in the ground or the presence of conductive substances. This kind of survey can be addressed with the aid of electromagnetic induction measurements taken with a ground conductivity meter. In this paper, starting from electromagnetic data collected by this device, we reconstruct the electrical conductivity of the soil with respect to depth with the aid of a regularized damped Gauss-Newton method. We propose an inversion method based on a low-rank approximation of the Jacobian of the function to be inverted, for which we develop exact analytical formulae. The algorithm chooses a relaxation parameter to ensure the positivity of the solution and implements various methods for the automatic estimation of the regularization parameter. This leads to a fast and reliable algorithm, which is tested in numerical experiments on both synthetic data sets and field data. The results show that the algorithm produces reasonable solutions for synthetic data sets, even in the presence of a noise level consistent with real applications, and yields results that are compatible with those obtained by electrical resistivity tomography in the case of field data. Research supported in part by Regione Sardegna grant CRP2_686.
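
    A minimal sketch of one damped Gauss-Newton update of the general kind used here is given below; it omits the paper's low-rank Jacobian formulation and automatic choice of the regularization parameter.

```python
import numpy as np

def damped_gauss_newton_step(J, r, alpha, step=1.0):
    """Regularized (damped) Gauss-Newton model update.

    J     : Jacobian of the forward response at the current conductivity model
    r     : data residual (observed minus predicted apparent conductivities)
    alpha : Tikhonov regularization parameter
    step  : relaxation parameter (shrunk if needed to keep conductivities positive)
    """
    n = J.shape[1]
    dm = np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ r)
    return step * dm
```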

  12. An hp symplectic pseudospectral method for nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits exponential convergence when the number of collocation points increases with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence when the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with the hp refinement strategy based on the residual error of the dynamic constraints, the proposed method can achieve a given precision within a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  13. Potential of multispectral synergism for observing ozone pollution by combining IASI-NG and UVNS measurements from the EPS-SG satellite

    NASA Astrophysics Data System (ADS)

    Costantino, Lorenzo; Cuesta, Juan; Emili, Emanuele; Coman, Adriana; Foret, Gilles; Dufour, Gaëlle; Eremenko, Maxim; Chailleux, Yohann; Beekmann, Matthias; Flaud, Jean-Marie

    2017-04-01

    Present and future satellite observations offer great potential for monitoring air quality on a daily and global basis. However, measurements from currently orbiting satellites do not allow a single sensor to accurately probe surface concentrations of gaseous pollutants such as tropospheric ozone. Combining information from IASI (Infrared Atmospheric Sounding Interferometer) and GOME-2 (Global Ozone Monitoring Experiment-2) respectively in the TIR and UV spectra, a recent multispectral method (referred to as IASI+GOME-2) has shown enhanced sensitivity for probing ozone in the lowermost troposphere (LMT, below 3 km altitude) with maximum sensitivity down to 2.20 km a.s.l. over land, while sensitivity for IASI or GOME-2 alone only peaks at 3 to 4 km at the lowest. In this work we develop a pseudo-observation simulator and evaluate the potential of future EPS-SG (EUMETSAT Polar System - Second Generation) satellite observations, from new-generation sensors IASI-NG (Infrared Atmospheric Sounding Interferometer - New Generation) and UVNS (Ultraviolet Visible Near-infrared Shortwave-infrared), to observe near-surface O3 through the IASI-NG+UVNS multispectral method. The pseudo-real state of the atmosphere is provided by the MOCAGE (MOdèle de Chimie Atmosphérique à Grande Échelle) chemical transport model. We perform full and accurate forward and inverse radiative transfer calculations for a period of 4 days (8-11 July 2010) over Europe. In the LMT, there is a remarkable agreement in the geographical distribution of O3 partial columns between IASI-NG+UVNS pseudo-observations and the corresponding MOCAGE pseudo-reality. With respect to synthetic IASI+GOME-2 products, IASI-NG+UVNS shows a higher correlation between pseudo-observations and pseudo-reality, which is enhanced by about 12 %. The bias on high ozone retrieval is reduced and the average accuracy increases by 22 %. The sensitivity to LMT ozone is also enhanced. On average, the degree of freedom for signal is higher by 159 % over land (from 0.29 to 0.75) and 214 % over ocean (from 0.21 to 0.66). The mean height of maximum sensitivity for the LMT peaks at 1.43 km over land and 2.02 km over ocean, respectively 1.03 and 1.30 km below that of IASI+GOME-2. IASI-NG+UVNS also shows good retrieval skill in the surface-2 km altitude range. It is one of a kind for retrieving ozone layers of 2-3 km thickness in the first 2-3 km of the atmosphere. IASI-NG+UVNS is expected to largely enhance the capacity to observe ozone pollution from space.

  14. Full 3D Microwave Tomography enhanced GPR surveys: a case study

    NASA Astrophysics Data System (ADS)

    Catapano, Ilaria; Soldovieri, Francesco; Affinito, Antonio; Hugenschmidt, Johannes

    2014-05-01

    Ground Penetrating Radar (GPR) systems are well-established non-invasive diagnostic tools capable of providing high-resolution images of the inner structure of the probed spatial region. Owing to this capability, GPR systems are nowadays increasingly considered in civil engineering surveys, since they can give information on constructive details as well as on the aging and risk factors affecting the health of an infrastructure. In this frame, accurate, reliable and easily interpretable images of the probed scenarios are mandatory in order to support the management of maintenance works and assure the safety of structures. Such a requirement motivates the use of different and sophisticated data processing approaches in order to compare more than one image of the same scene, thus improving the reliability and objectiveness of the GPR survey results. Among GPR data processing procedures, Microwave Tomography approaches based on the Born approximation cast the imaging as the solution of a linear inverse problem, which is solved by using the Truncated Singular Value Decomposition as a regularized inversion scheme [1]. So far, an approach exploiting a 2D scalar model of the scattering phenomenon has been adopted to process GPR data gathered along a single scan. In this case, 3D images are obtained by interpolating 2D reconstructions (commonly referred to as pseudo 3D imaging). Such an imaging approach has provided valuable results in several real cases dealing not only with surveys for civil engineering but also with archaeological prospection, subservice monitoring, security surveys and so on [1-4]. These encouraging results have motivated the development of a full 3D Microwave Tomography approach capable of accounting for the vectorial nature of the wave propagation. The reconstruction capabilities of this novel approach have been assessed mainly against experimental data collected in controlled laboratory conditions. The obtained results corroborate that the use of a full 3D scattering model allows an improved estimation of the objects' shape and size with respect to pseudo 3D imaging [5]. In this communication, the performance offered by the full 3D imaging approach is investigated by using a dataset from infrastructure inspection. Since the collapse of a car park in Switzerland that killed 7 firemen, "punching", where a pile remains upright but the ceiling carried by the pile falls down, has been considered a serious problem. The 3D tomography approach was applied to a dataset acquired in a car park in the vicinity of piles. Such datasets can be used for an assessment of the safety of such structures and can therefore be considered as relevant test cases for innovative data processing and inversion strategies. REFERENCES [1] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surf. Geophys., vol. 5, pp. 29-42, 2007. [2] I. Catapano, L. Crocco, R. Di Napoli, F. Soldovieri, A. Brancaccio, F. Pesando, A. Aiello, "Microwave tomography enhanced GPR surveys in Centaur's Domus, Regio VI of Pompeii, Italy", J. Geophys. Eng., vol. 9, S92-S99, 2012. [3] I. Catapano, R. Di Napoli, F. Soldovieri, M. Bavusi, A. Loperte, J. Dumoulin, "Structural monitoring via microwave tomography-enhanced GPR: the Montagnole test site", J. Geophys. Eng., vol. 9, S100-S107, 2012. [4] J. Hugenschmidt, A. Kalogeropoulos, F. Soldovieri, G. Prisco, "Processing strategies for high-resolution GPR concrete inspections", NDT & E International, vol. 43, no. 4, pp. 334-342, 2010. [5] I. Catapano, A. Affinito, G. Gennarelli, F. di Maio, A. Loperte, F. Soldovieri, "Full three-dimensional imaging via ground penetrating radar: assessment in controlled conditions and on field for archaeological prospecting", Appl. Phys. A: Materials Science and Processing, pp. 1-8, article in press.
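    To make the regularized inversion step concrete, here is a minimal sketch of Truncated Singular Value Decomposition applied to a linearized (Born-type) problem d = K m; the kernel, data and truncation level are hypothetical, not taken from the cited work.

```python
# A minimal TSVD sketch for a linear inverse problem d = K m, with an
# illustrative ill-conditioned kernel; all quantities are synthetic.
import numpy as np

def tsvd_solve(K, d, n_keep):
    """Regularized solution of d = K m keeping the n_keep largest singular values."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_keep] = 1.0 / s[:n_keep]      # discard small, noise-amplifying values
    return Vt.T @ (s_inv * (U.T @ d))      # m = V diag(1/s_k) U^T d

# Toy example: ill-conditioned kernel and slightly noisy data.
rng = np.random.default_rng(0)
K = rng.standard_normal((60, 40)) @ np.diag(np.logspace(0, -6, 40))
m_true = rng.standard_normal(40)
d = K @ m_true + 1e-6 * rng.standard_normal(60)
m_est = tsvd_solve(K, d, n_keep=10)        # stable but smoothed estimate of m_true
```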

  15. Analysis of Hypersonic Vehicle Wakes

    DTIC Science & Technology

    2015-09-17

    [Nomenclature fragment: factor used with the viscous Jacobian; matrix of left eigenvectors for A; R, specific gas constant; Re, Reynolds number; Recell, cell Reynolds number] ... focus was shifted to characterizing other wake phenomena. The aerothermal phenomena of interest in the wake include gas properties and chemical species ...

  16. Sorption kinetics and isotherm studies of a cationic dye using agricultural waste: broad bean peels.

    PubMed

    Hameed, B H; El-Khaiary, M I

    2008-06-15

    In this paper, broad bean peels (BBP), an agricultural waste, were evaluated for their ability to remove a cationic dye (methylene blue) from aqueous solutions. Batch mode experiments were conducted at 30 degrees C. Equilibrium sorption isotherms and kinetics were investigated. The kinetic data obtained at different concentrations were analyzed using pseudo-first-order, pseudo-second-order and intraparticle diffusion equations. The experimental data were fitted very well by the pseudo-first-order kinetic model. Analysis of the temporal change of q indicates that at the beginning of the process the overall rate of adsorption is controlled by film diffusion, and at a later stage intraparticle diffusion controls the rate. Diffusion coefficients and times of transition from film- to pore-diffusion control were estimated by piecewise linear regression. The experimental data were also analyzed by the Langmuir and Freundlich models. The sorption isotherm data fitted well to the Langmuir isotherm; the monolayer adsorption capacity was found to be 192.7 mg/g and the equilibrium adsorption constant Ka was 0.07145 l/mg at 30 degrees C. The results revealed that BBP was a promising sorbent for the removal of methylene blue from aqueous solutions.
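    The pseudo-first-order fit mentioned above can be reproduced schematically as follows; the uptake data and initial parameter guesses are purely illustrative, not the paper's measurements.

```python
# A minimal sketch, with made-up data: nonlinear fit of the pseudo-first-order
# kinetic model q(t) = qe * (1 - exp(-k1 * t)) to batch adsorption uptake.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    return qe * (1.0 - np.exp(-k1 * t))

t = np.array([5, 10, 20, 40, 60, 90, 120.0])       # contact time, min (illustrative)
q = np.array([35, 60, 95, 130, 150, 162, 168.0])   # uptake, mg/g (illustrative)

(qe_fit, k1_fit), _ = curve_fit(pseudo_first_order, t, q, p0=[170.0, 0.05])
print(f"qe = {qe_fit:.1f} mg/g, k1 = {k1_fit:.3f} 1/min")
```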

  17. Lexical orthography acquisition: Is handwriting better than spelling aloud?

    PubMed Central

    Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane

    2014-01-01

    Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by handwriting them down. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect which remained after control of the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task. PMID:24575058

  18. Lexical orthography acquisition: Is handwriting better than spelling aloud?

    PubMed

    Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane

    2014-01-01

    Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by handwriting them down. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect which remained after control of the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task.

  19. Measurements of mechanical anisotropy in brain tissue and implications for transversely isotropic material models of white matter

    PubMed Central

    Feng, Yuan; Okamoto, Ruth J.; Namani, Ravi; Genin, Guy M.; Bayly, Philip V.

    2013-01-01

    White matter in the brain is structurally anisotropic, consisting largely of bundles of aligned, myelin-sheathed axonal fibers. White matter is believed to be mechanically anisotropic as well. Specifically, transverse isotropy is expected locally, with the plane of isotropy normal to the local mean fiber direction. Suitable material models involve strain energy density functions that depend on the I4 and I5 pseudo-invariants of the Cauchy–Green strain tensor to account for the effects of relatively stiff fibers. The pseudo-invariant I4 is the square of the stretch ratio in the fiber direction; I5 contains contributions of shear strain in planes parallel to the fiber axis. Most, if not all, published models of white matter depend on I4 but not on I5. Here, we explore the small strain limits of these models in the context of experimental measurements that probe these dependencies. Models in which strain energy depends on I4 but not I5 can capture differences in Young’s (tensile) moduli, but will not exhibit differences in shear moduli for loading parallel and normal to the mean direction of axons. We show experimentally, using a combination of shear and asymmetric indentation tests, that white matter does exhibit such differences in both tensile and shear moduli. Indentation tests were interpreted through inverse fitting of finite element models in the limit of small strains. Results highlight that: (1) hyperelastic models of transversely isotropic tissues such as white matter should include contributions of both the I4 and I5 strain pseudo-invariants; and (2) behavior in the small strain regime can usefully guide the choice and initial parameterization of more general material models of white matter. PMID:23680651
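    For clarity, the two pseudo-invariants referred to above are conventionally written as below, with C the right Cauchy-Green deformation tensor and a the unit vector along the mean fiber direction (standard definitions, stated here for reference rather than quoted from the paper):

```latex
I_4 = \mathbf{a} \cdot \mathbf{C}\,\mathbf{a} = \lambda_f^2 , \qquad
I_5 = \mathbf{a} \cdot \mathbf{C}^2\,\mathbf{a} ,
```

    where λ_f is the stretch ratio along the fiber direction; I_5 additionally carries shear contributions in planes containing the fiber axis, which is why models lacking an I_5 term cannot distinguish shear moduli parallel and normal to the fibers.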

  20. Physiologic volume of phosphorus during hemodialysis: predictions from a pseudo one-compartment model.

    PubMed

    Leypoldt, John K; Akonur, Alp; Agar, Baris U; Culleton, Bruce F

    2012-10-01

    The kinetics of plasma phosphorus concentrations during hemodialysis (HD) are complex and cannot be described by conventional one- or two-compartment kinetic models. It has recently been shown by others that the physiologic (or apparent distribution) volume for phosphorus (Vr-P) increases with increasing treatment time and shows a large variation among patients treated by thrice-weekly and daily HD. Here, we describe the dependence of Vr-P on treatment time and predialysis plasma phosphorus concentration as predicted by a novel pseudo one-compartment model. The kinetics of plasma phosphorus during conventional and six-times-per-week daily HD were simulated as a function of treatment time per session for various dialyzer phosphate clearances and patient-specific phosphorus mobilization clearances (K(M)). Vr-P normalized to extracellular volume from these simulations was reported and compared with previously published empirical findings. Simulated results were relatively independent of dialyzer phosphate clearance and treatment frequency. In contrast, Vr-P was strongly dependent on treatment time per session; the increase in Vr-P with treatment time was larger for higher values of K(M). Vr-P was inversely dependent on predialysis plasma phosphorus concentration. There was significant variation among predicted Vr-P values, depending largely on the value of K(M). We conclude that a pseudo one-compartment model can describe the empirical dependence of the physiologic volume of phosphorus on treatment time and predialysis plasma phosphorus concentration. Further, the variation in physiologic volume of phosphorus among HD patients is largely due to differences in patient-specific phosphorus mobilization clearance. © 2012 The Authors. Hemodialysis International © 2012 International Society for Hemodialysis.

  1. Morphological and Molecular Characterization of Dietary-Induced Pseudo-Albinism during Post-Embryonic Development of Solea senegalensis (Kaup, 1858)

    PubMed Central

    Darias, Maria J.; Andree, Karl B.; Boglino, Anaïs; Rotllant, Josep; Cerdá-Reverter, José Miguel; Estévez, Alicia; Gisbert, Enric

    2013-01-01

    The appearance of the pseudo-albino phenotype was investigated in developing Senegalese sole (Solea senegalensis, Kaup 1858) larvae at morphological and molecular levels. In order to induce the development of pseudo-albinos, Senegalese sole larvae were fed Artemia enriched with high levels of arachidonic acid (ARA). The development of their skin pigmentation was compared to that of a control group fed Artemia enriched with a reference commercial product. The relative amount of skin melanophores, xanthophores and iridophores revealed that larval pigmentation developed similarly in both groups. However, results from different relative proportions, allocation patterns, shapes and sizes of skin chromatophores revealed changes in the pigmentation pattern between ARA and control groups from 33 days post hatching onwards. The new populations of chromatophores that should appear at post-metamorphosis were not formed in the ARA group. Further, spatial patterns of distribution between the already present larval xanthophores and melanophores were suggestive of short-range interaction that seemed to be implicated in the degradation of these chromatophores, leading to the appearance of the pseudo-albino phenotype. The expression profile of several key pigmentation-related genes revealed that melanophore development was promoted in pseudo-albinos without a sufficient degree of terminal differentiation, thus preventing melanogenesis. Present results suggest the potential roles of asip1 and slc24a5 genes on the down-regulation of trp1 expression, leading to defects in melanin production. Moreover, gene expression data supports the involvement of pax3, mitf and asip1 genes in the developmental disruption of the new post-metamorphic populations of melanophores, xanthophores and iridophores. PMID:23874785

  2. Inverse design of bulk morphologies in block copolymers using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn

    Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
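    A generic particle swarm optimizer of the kind used as the global optimizer above can be sketched as follows; the objective function is a simple stand-in for an SCFT forward simulation, and the parameter bounds are illustrative.

```python
# A minimal, generic PSO sketch (global optimizer only); the objective below is
# a toy quadratic mismatch, not a self-consistent field theory calculation.
import numpy as np

def pso(objective, lo, hi, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, lo.size))      # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Toy usage: recover hypothetical "design parameters" (e.g. block fractions and
# an interaction parameter) that minimize a mismatch to a target.
target = np.array([0.3, 0.5, 12.0])
best, best_f = pso(lambda p: np.sum((p - target) ** 2),
                   lo=np.array([0.0, 0.0, 5.0]), hi=np.array([1.0, 1.0, 40.0]))
```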

  3. On optimal control of linear systems in the presence of multiplicative noise

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1976-01-01

    This correspondence considers the problem of optimal regulator design for discrete time linear systems subjected to white state-dependent and control-dependent noise in addition to additive white noise in the input and the observations. A pseudo-deterministic problem is first defined in which multiplicative and additive input disturbances are present, but noise-free measurements of the complete state vector are available. This problem is solved via discrete dynamic programming. Next is formulated the problem in which the number of measurements is less than that of the state variables and the measurements are contaminated with state-dependent noise. The inseparability of control and estimation is brought into focus, and an 'enforced separation' solution is obtained via heuristic reasoning in which the control gains are shown to be the same as those in the pseudo-deterministic problem. An optimal linear state estimator is given in order to implement the controller.

  4. Modular Expression Language for Ordinary Differential Equation Editing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blake, Robert C.

    MELODEE is a language for describing systems of initial-value-problem ordinary differential equations, together with a compiler for the language that produces optimized code to integrate the differential equations. Features include rational polynomial approximation for expensive functions and automatic differentiation for symbolic Jacobians.

  5. Comparison of clipping and coiling in elderly patients with unruptured cerebral aneurysms

    PubMed Central

    Bekelis, Kimon; Gottlieb, Daniel J.; Su, Yin; O’Malley, A. James; Labropoulos, Nicos; Goodney, Philip; Lawton, Michael T.; MacKenzie, Todd A.

    2016-01-01

    OBJECTIVE The comparative effectiveness of the 2 treatment options—surgical clipping and endovascular coiling—for unruptured cerebral aneurysms remains an issue of debate and has not been studied in clinical trials. The authors investigated the association between treatment method for unruptured cerebral aneurysms and outcomes in elderly patients. METHODS The authors performed a cohort study of 100% of Medicare fee-for-service claims data for elderly patients who had treatment for unruptured cerebral aneurysms between 2007 and 2012. To control for measured confounding, the authors used propensity score conditioning and inverse probability weighting with mixed effects to account for clustering at the level of the hospital referral region (HRR). An instrumental variable (regional rates of coiling) analysis was used to control for unmeasured confounding and to create pseudo-randomization on the treatment method. RESULTS During the study period, 8705 patients underwent treatment for unruptured cerebral aneurysms and met the study inclusion criteria. Of these patients, 2585 (29.7%) had surgical clipping and 6120 (70.3%) had endovascular coiling. Instrumental variable analysis demonstrated no difference between coiling and clipping in 1-year postoperative mortality (OR 1.25, 95% CI 0.68–2.31) or 90-day readmission rate (OR 1.04, 95% CI 0.66–1.62). However, clipping was associated with a greater likelihood of discharge to rehabilitation (OR 6.39, 95% CI 3.85–10.59) and 3.6 days longer length of stay (LOS; 95% CI 2.90–4.71). The same associations were present in propensity score–adjusted and inverse probability– weighted models. CONCLUSIONS In a cohort of Medicare patients, there was no difference in mortality and the readmission rate between clipping and coiling of unruptured cerebral aneurysms. Clipping was associated with a higher rate of discharge to a rehabilitation facility and a longer LOS. PMID:27203150

  6. Development of an injectable pseudo-bone thermo-gel for application in small bone fractures.

    PubMed

    Kondiah, Pariksha J; Choonara, Yahya E; Kondiah, Pierre P D; Kumar, Pradeep; Marimuthu, Thashree; du Toit, Lisa C; Pillay, Viness

    2017-03-30

    A pseudo-bone thermo-gel was synthesized and evaluated for its physicochemical, mechanical and rheological properties, with a view to its application in the treatment of small bone fractures. The pseudo-bone thermo-gel was shown to have thermo-responsive properties, behaving as a solution at temperatures below 25°C and forming a gel when maintained at physiological conditions. Poly(propylene fumarate) (PPF), Pluronic F127 and PEG-PCL-PEG were strategically blended to obtain a thermo-responsive delivery system that mimics the mechanical properties of bone with sufficient matrix hardness and resilience. A Biopharmaceutics Classification System (BCS) class II drug, simvastatin, selected for its bone healing properties, was loaded into the pseudo-bone thermo-gel. In vitro release analysis was undertaken on a series of experimental formulations, with the ideal formulation providing controlled drug release for up to 14 days. Ex vivo studies were undertaken on osteoporotic human clavicle bone samples with an induced 4 mm diameter butterfly fracture. X-ray, ultrasound and textural analyses, undertaken on the fractured bones before and after treatment, displayed significant bone filling, matrix hardening and matrix resilience. These characteristics demonstrate the significant potential of the pseudo-bone thermo-gel for application in small bone fractures. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Expanding the capability of reaction-diffusion codes using pseudo traps and temperature partitioning: Applied to hydrogen uptake and release from tungsten

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmonds, M. J.; Yu, J. H.; Wang, Y. Q.

    Simulating the implantation and thermal desorption evolution in a reaction-diffusion model requires solving a set of coupled differential equations that describe the trapping and release of atomic species in Plasma Facing Materials (PFMs). These fundamental equations are well outlined by the Tritium Migration Analysis Program (TMAP), which can model systems with no more than three active traps per atomic species. To overcome this limitation, we have developed a Pseudo Trap and Temperature Partition (PTTP) scheme that allows us to lump multiple inactive traps into one pseudo trap, simplifying the system of equations to be solved. For all temperatures, we show that the trapping of atoms from solute is exactly accounted for when using a pseudo trap. A single effective pseudo trap energy cannot, however, well replicate the release from multiple traps, each with its own detrapping energy. Atoms held in a high-energy trap will remain trapped at relatively low temperatures, and thus there is a temperature range in which release from high-energy traps is effectively inactive. By partitioning the temperature range into segments, a pseudo trap can be defined for each segment to account for multiple high-energy traps that are actively trapping but are effectively not releasing atoms. With increasing temperature, as in controlled thermal desorption, the lowest-energy trap is nearly emptied and can be removed from the set of coupled equations, while the next higher-energy trap becomes an actively releasing trap. Each segment is thus calculated sequentially, with the last time step of a given segment solution being used as an initial input for the next segment, as only the pseudo and actively releasing traps are modeled. This PTTP scheme is then applied to experimental thermal desorption data for tungsten (W) samples damaged with heavy ions, which display six distinct release peaks during thermal desorption. Without modifying the TMAP7 source code, the PTTP scheme is shown to successfully model the D retention in all six traps. In conclusion, we demonstrate the full reconstruction from the plasma implantation phase through the controlled thermal desorption phase with detrapping energies near 0.9, 1.1, 1.4, 1.7, 1.9 and 2.1 eV for a W sample damaged at room temperature.

  8. Expanding the capability of reaction-diffusion codes using pseudo traps and temperature partitioning: Applied to hydrogen uptake and release from tungsten

    DOE PAGES

    Simmonds, M. J.; Yu, J. H.; Wang, Y. Q.; ...

    2018-06-04

    Simulating the implantation and thermal desorption evolution in a reaction-diffusion model requires solving a set of coupled differential equations that describe the trapping and release of atomic species in Plasma Facing Materials (PFMs). These fundamental equations are well outlined by the Tritium Migration Analysis Program (TMAP), which can model systems with no more than three active traps per atomic species. To overcome this limitation, we have developed a Pseudo Trap and Temperature Partition (PTTP) scheme that allows us to lump multiple inactive traps into one pseudo trap, simplifying the system of equations to be solved. For all temperatures, we show that the trapping of atoms from solute is exactly accounted for when using a pseudo trap. A single effective pseudo trap energy cannot, however, well replicate the release from multiple traps, each with its own detrapping energy. Atoms held in a high-energy trap will remain trapped at relatively low temperatures, and thus there is a temperature range in which release from high-energy traps is effectively inactive. By partitioning the temperature range into segments, a pseudo trap can be defined for each segment to account for multiple high-energy traps that are actively trapping but are effectively not releasing atoms. With increasing temperature, as in controlled thermal desorption, the lowest-energy trap is nearly emptied and can be removed from the set of coupled equations, while the next higher-energy trap becomes an actively releasing trap. Each segment is thus calculated sequentially, with the last time step of a given segment solution being used as an initial input for the next segment, as only the pseudo and actively releasing traps are modeled. This PTTP scheme is then applied to experimental thermal desorption data for tungsten (W) samples damaged with heavy ions, which display six distinct release peaks during thermal desorption. Without modifying the TMAP7 source code, the PTTP scheme is shown to successfully model the D retention in all six traps. In conclusion, we demonstrate the full reconstruction from the plasma implantation phase through the controlled thermal desorption phase with detrapping energies near 0.9, 1.1, 1.4, 1.7, 1.9 and 2.1 eV for a W sample damaged at room temperature.

  9. Modular Bundle Adjustment for Photogrammetric Computations

    NASA Astrophysics Data System (ADS)

    Börlin, N.; Murtiyoso, A.; Grussenmeyer, P.; Menna, F.; Nocerino, E.

    2018-05-01

    In this paper we investigate how the residuals in bundle adjustment can be split into a composition of simple functions. According to the chain rule, the Jacobian (linearisation) of the residual can be formed as a product of the Jacobians of the individual steps. When implemented, this enables a modularisation of the computation of the bundle adjustment residuals and Jacobians where each component has limited responsibility. This enables simple replacement of components, e.g. to implement different projection or rotation models by exchanging a module. The technique has previously been used to implement bundle adjustment in the open-source package DBAT (Börlin and Grussenmeyer, 2013) based on the Photogrammetric and Computer Vision interpretations of the Brown (1971) lens distortion model. In this paper, we applied the technique to investigate how affine distortions can be used to model the projection of a tilt-shift lens. Two extended distortion models were implemented to test the hypothesis that the ordering of the affine and lens distortion steps can be changed to reduce the size of the residuals of a tilt-shift lens calibration. Results on synthetic data confirm that the ordering of the affine and lens distortion steps matters and is detectable by DBAT. However, when applied to a real camera calibration data set of a tilt-shift lens, no difference between the extended models was seen. This suggests that the tested hypothesis is false and that other effects need to be modelled to better explain the projection. The relatively low implementation effort that was needed to generate the models suggests that the technique can be used to investigate other novel projection models in photogrammetry, including modelling changes in the 3D geometry to better understand the tilt-shift lens.
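    The chain-rule modularisation described above can be illustrated with a toy composition; the modules below are generic stand-ins rather than DBAT's actual projection, rotation or distortion components.

```python
# A minimal sketch: if the residual is a composition r(x) = f3(f2(f1(x))), the
# chain rule gives J_r(x) = J3 * J2 * J1, so each module only has to supply its
# own value and Jacobian. Modules here are toy examples.
import numpy as np

class Module:
    def __init__(self, fun, jac):
        self.fun, self.jac = fun, jac

def compose(modules, x):
    """Evaluate the composed function and its Jacobian via the chain rule."""
    y, J = x, np.eye(x.size)
    for m in modules:
        J = m.jac(y) @ J          # left-multiply by the local Jacobian
        y = m.fun(y)
    return y, J

# Toy pipeline: a rotation-like linear map followed by a nonlinear "distortion".
A = np.array([[0.0, -1.0], [1.0, 0.0]])
rotate = Module(lambda v: A @ v, lambda v: A)
distort = Module(lambda v: v + 0.1 * v**3, lambda v: np.diag(1.0 + 0.3 * v**2))
y, J = compose([rotate, distort], np.array([1.0, 2.0]))
```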

  10. Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion

    NASA Astrophysics Data System (ADS)

    Jakobsen, M.; Wu, R. S.

    2016-12-01

    Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation. The use of singular-value decomposition representations is not required in our formulation since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
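    The pseudo-inverse initialisation step can be sketched as follows, assuming a schematic data equation d = Gr T Gs between the data matrix and the T-matrix; the Green's-function matrices, sizes and data are hypothetical.

```python
# A minimal sketch: build a data-compatible "experimental" T-matrix from the
# data via Moore-Penrose pseudo-inverses, assuming the schematic relation
# d = Gr @ T @ Gs. All matrices below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
Gr = rng.standard_normal((30, 50))     # receiver-side operator (illustrative size)
Gs = rng.standard_normal((50, 20))     # source-side operator (illustrative size)
T_true = rng.standard_normal((50, 50))
d = Gr @ T_true @ Gs                   # synthetic "observed" data

T_exp = np.linalg.pinv(Gr) @ d @ np.linalg.pinv(Gs)   # data-compatible estimate
print(np.allclose(Gr @ T_exp @ Gs, d))  # True: reproduces the data, though T_exp != T_true
```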

  11. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the "real world", and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use these to calculate errors in earthquake location and velocity inversion results when we perturb the models and try to invert to recover them. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. "Real" travel times are perturbed with noise and hypocenters are perturbed to replicate a starting location away from the "true" location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of the computers needed for the large arrays in the inversion and a realistic number of stations and events needed to provide the data.

  12. Adaptive online inverse control of a shape memory alloy wire actuator using a dynamic neural network

    NASA Astrophysics Data System (ADS)

    Mai, Huanhuan; Song, Gangbing; Liao, Xiaofeng

    2013-01-01

    Shape memory alloy (SMA) actuators exhibit severe hysteresis, a nonlinear behavior, which complicates control strategies and limits their applications. This paper presents a new approach to controlling an SMA actuator through an adaptive inverse model based controller that consists of a dynamic neural network (DNN) identifier, a copy dynamic neural network (CDNN) feedforward term and a proportional (P) feedback action. Unlike fixed hysteresis models used in most inverse controllers, the proposed one uses a DNN to identify online the relationship between the applied voltage to the actuator and the displacement (the inverse model). Even without a priori knowledge of the SMA hysteresis and without pre-training, the proposed controller can precisely control the SMA wire actuator in various tracking tasks by identifying online the inverse model of the SMA actuator. Experiments were conducted, and experimental results demonstrated real-time modeling capabilities of DNN and the performance of the adaptive inverse controller.

  13. Preparation and characterization of porous reduced graphene oxide based inverse spinel nickel ferrite nanocomposite for adsorption removal of radionuclides.

    PubMed

    Lingamdinne, Lakshmi Prasanna; Choi, Yu-Lim; Kim, Im-Soon; Yang, Jae-Kyu; Koduru, Janardhan Reddy; Chang, Yoon-Young

    2017-03-15

    For the removal of uranium(VI) (U(VI)) and thorium(IV) (Th(IV)), graphene oxide based inverse spinel nickel ferrite (GONF) nanocomposite and reduced graphene oxide based inverse spinel nickel ferrite (rGONF) nanocomposite were prepared by co-precipitation of GO with nickel and iron salts in one pot. Spectral characterization analyses revealed that GONF and rGONF have a porous surface morphology with average particle sizes of 41.41 nm and 32.16 nm, respectively. Magnetic property measurement system (MPMS) studies confirmed the formation of ferromagnetic GONF and superparamagnetic rGONF. The adsorption kinetics studies found that pseudo-second-order kinetics described the U(VI) and Th(IV) adsorption well. The adsorption isotherm results showed that the adsorption of U(VI) and Th(IV) was due to a monolayer on the homogeneous surfaces of GONF and rGONF. The adsorption of both U(VI) and Th(IV) increased with increasing system temperature from 293 to 333 ± 2 K. The thermodynamic studies reveal that the U(VI) and Th(IV) adsorption onto GONF and rGONF was endothermic. GONF and rGONF, which could be separated by an external magnetic field, were recycled and re-used for up to five cycles without any significant loss of adsorption capacity. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models that are well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training time in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method can use less training time to get an inverse model. Finally, we apply this proposed method to control magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides a greater flexibility and better performance in controlling magnetic bearing systems.

  16. Using Ankle Bracing and Taping to Decrease Range of Motion and Velocity During Inversion Perturbation While Walking.

    PubMed

    Hall, Emily A; Simon, Janet E; Docherty, Carrie L

    2016-04-01

    Prophylactic ankle supports are commonly used. However, the effectiveness of external supports in preventing an inversion stress has been debated. To evaluate how ankle bracing and taping affect inversion range of motion, time to maximum inversion, inversion velocity, and perceived ankle stability compared with a control condition during a dynamic inversion perturbation while walking. Crossover study. Research laboratory. A total of 42 physically active participants (16 men, 26 women; age = 21.2 ± 3.3 years, height = 168.9 ± 8.9 cm, mass = 66.1 ± 11.4 kg) volunteered. Participants walked on a custom-built walkway that suddenly inverted their ankles to 30° in 3 conditions: brace, tape, and control (no external support). We used an ASO ankle brace for the brace condition and a closed basketweave technique for the tape condition. Three trials were completed for each condition. Main Outcome Measure(s) Maximum inversion (degrees), time to maximum inversion (milliseconds), and inversion velocity (degrees per second) were measured using an electrogoniometer, and perceived stability (centimeters) was measured using a visual analog scale. Maximum inversion decreased more in the brace condition (20.1°) than in the control (25.3°) or tape (22.3°) conditions (both P values = .001), and the tape condition restricted inversion more than the control condition (P = .001). Time to maximum inversion was greater in the brace condition (143.5 milliseconds) than in the control (123.7 milliseconds; P = .001) or tape (130.7 milliseconds; P = .009) conditions and greater in the tape than in the control condition (P = .02). Inversion velocity was slower in the brace condition (142.6°/s) than in the control (209.1°/s) or tape (174.3°/s) conditions (both P values = .001) and slower in the tape than in the control condition (P = .001). Both the brace and tape conditions provided more perceived stability (0.98 cm and 0.94 cm, respectively) than the control condition (2.38 cm; both P values = .001). Both prophylactic conditions affected inversion range of motion, time to maximum inversion, inversion velocity, and perceived ankle stability. However, bracing provided more restriction at a slower rate than taping.

  17. Using Ankle Bracing and Taping to Decrease Range of Motion and Velocity During Inversion Perturbation While Walking

    PubMed Central

    Hall, Emily A.; Simon, Janet E.; Docherty, Carrie L.

    2016-01-01

    Context:  Prophylactic ankle supports are commonly used. However, the effectiveness of external supports in preventing an inversion stress has been debated. Objective:  To evaluate how ankle bracing and taping affect inversion range of motion, time to maximum inversion, inversion velocity, and perceived ankle stability compared with a control condition during a dynamic inversion perturbation while walking. Design:  Crossover study. Setting:  Research laboratory. Patients or Other Participants:  A total of 42 physically active participants (16 men, 26 women; age = 21.2 ± 3.3 years, height = 168.9 ± 8.9 cm, mass = 66.1 ± 11.4 kg) volunteered. Intervention(s):  Participants walked on a custom-built walkway that suddenly inverted their ankles to 30° in 3 conditions: brace, tape, and control (no external support). We used an ASO ankle brace for the brace condition and a closed basketweave technique for the tape condition. Three trials were completed for each condition. Main Outcome Measure(s):  Maximum inversion (degrees), time to maximum inversion (milliseconds), and inversion velocity (degrees per second) were measured using an electrogoniometer, and perceived stability (centimeters) was measured using a visual analog scale. Results:  Maximum inversion decreased more in the brace condition (20.1°) than in the control (25.3°) or tape (22.3°) conditions (both P values = .001), and the tape condition restricted inversion more than the control condition (P = .001). Time to maximum inversion was greater in the brace condition (143.5 milliseconds) than in the control (123.7 milliseconds; P = .001) or tape (130.7 milliseconds; P = .009) conditions and greater in the tape than in the control condition (P = .02). Inversion velocity was slower in the brace condition (142.6°/s) than in the control (209.1°/s) or tape (174.3°/s) conditions (both P values = .001) and slower in the tape than in the control condition (P = .001). Both the brace and tape conditions provided more perceived stability (0.98 cm and 0.94 cm, respectively) than the control condition (2.38 cm; both P values = .001). Conclusions:  Both prophylactic conditions affected inversion range of motion, time to maximum inversion, inversion velocity, and perceived ankle stability. However, bracing provided more restriction at a slower rate than taping. PMID:27111586

  18. Feldspar 40Ar/39Ar dating of ICDP PALEOVAN cores

    NASA Astrophysics Data System (ADS)

    Engelhardt, Jonathan Franz; Sudo, Masafumi; Stockhecke, Mona; Oberhänsli, Roland

    2017-11-01

    Volcaniclastic fall deposits in ICDP drilling cores from Lake Van, Turkey, contain sodium-rich sanidine and calcium-rich anorthoclase, which both comprise a variety of textural zoning and inclusions. An age model records the lake's history and is based on climate-stratigraphic correlations, tephrostratigraphy, paleomagnetics, and earlier 40Ar/39Ar analyses (Stockhecke et al., 2014b). Results from total fusion and stepwise heating 40Ar/39Ar analyses presented in this study allow for the comparison of radiometric constraints from texturally diversified feldspar and the multi-proxy lacustrine age model and vice versa. This study has investigated several grain-size fractions of feldspar from 13 volcaniclastic units. The feldspars show textural features that are visible in cathodoluminescence (CL) or back-scattered electron (BSE) images and can be subdivided into three dominant zoning-types: (1) compositional zoning, (2) round pseudo-oscillatory zoning and (3) resorbed and patchy zoning (Ginibre et al., 2004). Round pseudo-oscillatory zoning records a sensitive alternation of Fe and Ca that also reflects resorption processes. This is only visible in CL images. Compositional zoning reflects anticorrelated anorthite and orthoclase contents and is visible in BSE. Eleven inverse isochron ages from total fusion and three from stepwise heating analyses fit the age model. Four experiments resulted in older inverse isochron ages that do not concur with the model within 2σ uncertainties and that deviate from 1 ka to 17 ka minimum. C- and R-type zoning are interpreted as representing growth in magma chamber cupolas, as wall mushes, or in narrow conduits. Persistent compositions of PO-type crystals and abundant surfaces recording dissolution features correspond to formation within a magma chamber. C-type zoning and R-type zoning have revealed an irregular incorporation of melt and fluid inclusions. These two types of zoning in feldspar are interpreted as preferentially contributing either heterogeneously distributed excess 40Ar or inherited 40Ar to the deviating 40Ar/39Ar ages that are discussed in this study.

  19. Numerical solutions of nonlinear STIFF initial value problems by perturbed functional iterations

    NASA Technical Reports Server (NTRS)

    Dey, S. K.

    1982-01-01

    Numerical solution of nonlinear stiff initial value problems by a perturbed functional iterative scheme is discussed. The algorithm does not fully linearize the system and requires only the diagonal terms of the Jacobian. Some examples related to chemical kinetics are presented.
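    In the same spirit, though not Dey's exact scheme, the sketch below takes an implicit-Euler step for a stiff system and solves the per-step nonlinear equation with an iteration that uses only the diagonal of the Jacobian; the test system is a toy.

```python
# A minimal sketch: implicit Euler for y' = f(y), with the implicit equation
# solved by Jacobi-type Newton sweeps that use only diag(df/dy).
import numpy as np

def step_diag_newton(f, diag_jac, y, h, sweeps=20):
    """One implicit-Euler step using diagonal-Jacobian iterations."""
    z = y.copy()
    for _ in range(sweeps):
        g = z - y - h * f(z)               # residual of the implicit equation
        dg_diag = 1.0 - h * diag_jac(z)    # diagonal of its Jacobian
        z = z - g / dg_diag                # update component-wise
    return z

# Toy stiff system (decoupled here, so the diagonal happens to be exact).
lam = np.array([-1.0, -1000.0])
f = lambda y: lam * y
diag_jac = lambda y: lam
y = np.array([1.0, 1.0])
for _ in range(10):
    y = step_diag_newton(f, diag_jac, y, h=0.1)
print(y)
```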

  20. Entropy-stable summation-by-parts discretization of the Euler equations on general curved elements

    NASA Astrophysics Data System (ADS)

    Crean, Jared; Hicken, Jason E.; Del Rey Fernández, David C.; Zingg, David W.; Carpenter, Mark H.

    2018-03-01

    We present and analyze an entropy-stable semi-discretization of the Euler equations based on high-order summation-by-parts (SBP) operators. In particular, we consider general multidimensional SBP elements, building on and generalizing previous work with tensor-product discretizations. In the absence of dissipation, we prove that the semi-discrete scheme conserves entropy; significantly, this proof of nonlinear L2 stability does not rely on integral exactness. Furthermore, interior penalties can be incorporated into the discretization to ensure that the total (mathematical) entropy decreases monotonically, producing an entropy-stable scheme. SBP discretizations with curved elements remain accurate, conservative, and entropy stable provided the mapping Jacobian satisfies the discrete metric invariants; polynomial mappings at most one degree higher than the SBP operators automatically satisfy the metric invariants in two dimensions. In three dimensions, we describe an elementwise optimization that leads to suitable Jacobians in the case of polynomial mappings. The properties of the semi-discrete scheme are verified and investigated using numerical experiments.

  1. Jointly reconstructing ground motion and resistivity for ERT-based slope stability monitoring

    NASA Astrophysics Data System (ADS)

    Boyle, Alistair; Wilkinson, Paul B.; Chambers, Jonathan E.; Meldrum, Philip I.; Uhlemann, Sebastian; Adler, Andy

    2018-02-01

    Electrical resistivity tomography (ERT) is increasingly being used to investigate unstable slopes and to monitor the hydrogeological processes within them. However, movement of electrodes, or incorrect placement of electrodes with respect to an assumed model, can introduce significant resistivity artefacts into the reconstruction. In this work, we demonstrate a joint resistivity and electrode movement reconstruction algorithm within an iterative Gauss-Newton framework. We apply this to ERT monitoring data from an active slow-moving landslide in the UK. Results show fewer resistivity artefacts and suggest that electrode movement and resistivity can be reconstructed at the same time under certain conditions. A new 2.5-D formulation for the electrode position Jacobian is developed and is shown to give accurate numerical solutions when compared to the adjoint method on 3-D models. On large finite element meshes, the newly developed approach was also orders of magnitude faster to compute than the 3-D adjoint method, and it addressed modelling errors in the 2-D perturbation and adjoint electrode position Jacobians.
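    One joint Gauss-Newton update of the kind described above can be sketched with an augmented Jacobian whose columns cover both conductivity and electrode-position parameters; the forward model and Jacobians below are linear placeholders, not the paper's 2.5-D ERT operators.

```python
# A minimal sketch of a damped Gauss-Newton step for the stacked parameter
# vector [sigma; pos]; fwd, J_sigma and J_pos are hypothetical placeholders.
import numpy as np

def gauss_newton_joint_step(d_obs, fwd, J_sigma, J_pos, sigma, pos, lam=1e-2):
    r = d_obs - fwd(sigma, pos)                       # data residual
    J = np.hstack([J_sigma(sigma, pos), J_pos(sigma, pos)])
    H = J.T @ J + lam * np.eye(J.shape[1])            # damping for conditioning
    dx = np.linalg.solve(H, J.T @ r)
    return sigma + dx[:sigma.size], pos + dx[sigma.size:]

# Toy usage with a linear stand-in forward model.
rng = np.random.default_rng(2)
A, B = rng.standard_normal((40, 10)), rng.standard_normal((40, 4))
fwd = lambda s, p: A @ s + B @ p
d_obs = fwd(rng.standard_normal(10), 0.05 * rng.standard_normal(4))
sigma, pos = gauss_newton_joint_step(d_obs, fwd, lambda s, p: A, lambda s, p: B,
                                     np.zeros(10), np.zeros(4))
```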

  2. Value at 2 of the L-function of an elliptic curve

    NASA Astrophysics Data System (ADS)

    Brunault, Francois

    2006-02-01

    We study the special value at 2 of L-functions of modular forms of weight 2 on congruence subgroups of the modular group. We prove an explicit version of Beilinson's theorem for the modular curve X_1(N). When N is prime, we deduce that the target space of Beilinson's regulator map is generated by the images of Milnor symbols associated to modular units of X_1(N). We also suggest a reformulation of Zagier's conjecture on L(E,2) for the jacobian J_1(N) of X_1(N), where E is an elliptic curve of conductor N. In this direction we define an analogue of the elliptic dilogarithm for any jacobian J : it is a function R_J from the complex points of J to a finite-dimensional vector space. In the case J=J_1(N), we establish a link between the aforementioned L-values and the function R_J evaluated at Q-rational points of the cuspidal subgroup of J.

  3. A set of parallel, implicit methods for a reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids

    DOE PAGES

    Xia, Yidong; Luo, Hong; Frisbey, Megan; ...

    2014-07-01

    A set of implicit methods is proposed for a third-order hierarchical WENO reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids. An attractive feature of these methods is the application of the Jacobian matrix based on the P1 element approximation, resulting in a huge reduction of memory requirements compared with DG(P2). Three approaches -- analytical derivation, divided differencing, and automatic differentiation (AD) -- are presented to construct the Jacobian matrix, of which the AD approach shows the best robustness. A variety of compressible flow problems are computed to demonstrate the fast convergence of the implemented flow solver. Furthermore, an SPMD (single program, multiple data) programming paradigm based on MPI is proposed to achieve parallelism. The numerical results on complex geometries indicate that this low-storage implicit method can provide a viable and attractive DG solution for complicated flows of practical importance.
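    The divided-differencing option mentioned above can be illustrated generically: each Jacobian column is approximated by perturbing one unknown and differencing the residual. The residual below is a toy; the solver's actual implementation may differ.

```python
# A minimal sketch of a divided-difference (finite-difference) Jacobian.
import numpy as np

def divided_difference_jacobian(F, u, eps=1e-7):
    """Approximate J[i, j] = dF_i/du_j with one-sided differences."""
    F0 = F(u)
    J = np.zeros((F0.size, u.size))
    for j in range(u.size):
        du = np.zeros_like(u)
        du[j] = eps * max(1.0, abs(u[j]))   # scale the perturbation
        J[:, j] = (F(u + du) - F0) / du[j]
    return J

# Toy residual with a known analytic Jacobian for comparison.
F = lambda u: np.array([u[0]**2 + u[1], np.sin(u[1]) - u[0]])
print(divided_difference_jacobian(F, np.array([1.0, 2.0])))
# Analytic Jacobian at (1, 2): [[2, 1], [-1, cos(2)]]
```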

  4. Multi-Rate Digital Control Systems with Simulation Applications. Volume I. Technical Report

    DTIC Science & Technology

    1980-09-01

    [Extraction fragments from the report: list-of-figures entries "45. A Pseudo Differentiation Configuration" and "46. Bode Plot, Pseudo Differentiation"; a note that the * notation on x2 indicates an "unconventional" sampling operation (Figs. 7a and 7b); and a passage in which the general multi-rate/multiple-order open-loop system of Fig. 21 is given a sine-wave input. The accompanying equations (114)-(115) are not recoverable from the snippet.]

  5. The origins of microstructure in phase inversion coatings or membranes: Snapshots of the transient from time-sectioning cryo-SEM

    NASA Astrophysics Data System (ADS)

    Prakash, Sai Sivasankaran

    2001-11-01

    Time-sectioning cryogenic scanning electron microscopy (cryo-SEM) is a unique method of visualizing how the microstructure of liquid coatings evolves during processing. Time-sectioning means rapidly freezing (nearly) identical specimens at successively later stages of the process; doing this requires that coating and drying be well controlled in the dry phase inversion process, and solvents exchange likewise in the wet phase inversion process. With control, frozen specimens are fractured, etched by limited sublimation, sputter-coated, and imaged at temperatures of ca -175°C. The coatings examined were of cellulose acetate, of high and low molecular weights, and polysulfone in mixed solvents and nonsolvents: acetone and water with cellulose acetate undergoing dry phase inversion; and tetrahydrofuran, dimethylacetamide, ethanol with polysulfone undergoing dry-wet phase inversion. All coatings, cast on silicon substrates, were initially homogeneous. The initial compositions of the high and low molecular weight cellulose acetate ternary solutions were "off-critical" and "near-critical", respectively, connoting their proximities to the critical or plait point of the phase diagram. The initial composition of the polysulfone quaternary solution was located near the binodal of the pseudo-ternary phase diagram. It appeared that as the higher molecular weight cellulose acetate coating dries, it nucleates and grows polymer-poor droplets that coalesce into a bicontinuous structure underlying a thin, dense skin. Bicontinuity of structure was verified by stereomicroscopy of the dry sample. The lower molecular weight cellulose acetate coating phase-separates, seemingly spinodally, directly into a bicontinuous structure whose polymer-rich network, stressed by frustrated in-plane shrinkage, ruptures far beneath the skin in some locales to form macrovoids. When, after partial drying, the polysulfone coating was immersed in a bath of water, a nonsolvent, it appeared to swell in thickness as it phase-separates. A dense skin, thinner than a micron, appeared to overlie a two-phase substructure that is punctuated with pear-shaped macrovoids. At early immersion times, this substructure is visibly bicontinuous or open-celled near the bath-side, and dispersion-like (droplets dispersed in a polymeric matrix) or closed-celled near the substrate-side. Moreover, in the bicontinuous regions, length-scales of the individual phases seem to increase across the coating thickness from the bath-side to the substrate-side. After prolonged immersion, the substructure, excluding the macrovoids, is entirely bicontinuous. The bicontinuity presumably results from a combination of spinodal decomposition and nucleation and growth plus coalescence. Quite strikingly, macrovoids are present exclusively in regions where phases are bicontinuous, and are absent where droplets are dispersed in the polymeric matrix. Evidence suggests that macrovoids result from an instability caused by a progressive rupture of polymer-rich links deeper and deeper beneath the skin, aggravated by stress localization in the rupturing network and a buildup of pressure in the polymer-poor phase (the pore space), as suspected by Grobe and Meyer in 1959.

  6. Optimization of finite difference forward modeling for elastic waves based on optimum combined window functions

    NASA Astrophysics Data System (ADS)

    Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang

    2017-03-01

    Full waveform inversion and reverse time migration are active research areas in seismic exploration. Forward modeling in the time domain determines the precision of the results, and finite difference numerical solutions have been widely adopted as an important mathematical tool for forward modeling. In this article, optimum combined window functions were designed for the finite difference operator, obtained from a truncated approximation of the spatial convolution series in pseudo-spectral space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the individual window functions, providing better truncation results, but also allow the truncation error of the finite difference operator to be controlled manually and visually by adjusting the combination weights and analyzing the main and side lobes of the amplitude response. The error level and elastic forward modeling under the proposed combined scheme were compared with results from conventional window functions and modified binomial windows; numerical dispersion is significantly suppressed relative to both. Numerical simulation verifies the reliability of the proposed method.
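    The combined-window idea above can be sketched in a few lines: taper the exact pseudo-spectral convolution coefficients of the first-derivative operator with a weighted mixture of two classical windows and inspect the resulting dispersion error. This is an illustrative reconstruction rather than the authors' scheme; the Hann/Blackman pair and the mixing weight alpha are assumptions.

```python
# Sketch: combined-window tapering of a truncated pseudo-spectral
# first-derivative stencil (illustrative, not the paper's exact operator).
import numpy as np

M, h = 8, 1.0                         # half stencil length, grid spacing
n = np.arange(1, M + 1)
c = (-1.0) ** (n + 1) / (n * h)       # exact convolution coefficients of d/dx

alpha = 0.6                           # assumed combination weight
m_full = np.arange(-M, M + 1)
hann = 0.5 + 0.5 * np.cos(np.pi * m_full / M)
blackman = 0.42 + 0.5 * np.cos(np.pi * m_full / M) + 0.08 * np.cos(2 * np.pi * m_full / M)
w = alpha * hann + (1.0 - alpha) * blackman      # combined window
a = w[M + 1:] * c                     # windowed one-sided coefficients

# amplitude response of the antisymmetric stencil: D(k) = 2 * sum_n a_n sin(n k h)
k = np.linspace(0.01, np.pi / h, 400)
response = 2.0 * np.sum(a[None, :] * np.sin(np.outer(k, n) * h), axis=1)
rel_err = np.abs(response - k) / k
print("max relative dispersion error below 0.8*Nyquist:",
      rel_err[k < 0.8 * np.pi / h].max())
```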

  7. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    NASA Astrophysics Data System (ADS)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
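    The Jacobian-Free Newton-Krylov ingredient named above can be illustrated compactly: the Krylov solver only needs Jacobian-vector products, which are approximated by finite differences of the residual. The toy 1-D residual below (diffusion with a nonlinear source), and the absence of preconditioning, AMR and step-size control, are simplifications for illustration.

```python
# Minimal Jacobian-Free Newton-Krylov sketch on a toy nonlinear system.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    # toy residual: 1-D diffusion with a nonlinear (u^4-like) sink and Dirichlet ends
    r = np.empty_like(u)
    r[0], r[-1] = u[0] - 1.0, u[-1]
    r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] - 0.1 * u[1:-1] ** 4
    return r

def jfnk(u, tol=1e-10, eps=1e-7):
    for _ in range(30):                          # Newton loop
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        # matrix-free Jacobian-vector product: J v ~ (F(u + eps v) - F(u)) / eps
        Jv = lambda v: (residual(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=Jv)
        du, _ = gmres(J, -r)                     # unpreconditioned Krylov solve
        u = u + du
    return u

u = jfnk(np.linspace(1.0, 0.0, 50))
print("final residual norm:", np.linalg.norm(residual(u)))
```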

  8. Balancing the Power-to-Load Ratio for a Novel Variable Geometry Wave Energy Converter with Nonideal Power Take-Off in Regular Waves: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D

    This work attempts to balance power absorption against structural loading for a novel variable geometry wave energy converter. The variable geometry consists of four identical flaps that will be opened in ascending order starting with the flap closest to the seafloor and moving to the free surface. The influence of a pitch motion constraint on power absorption when utilizing a nonideal power take-off (PTO) is examined and found to reduce the losses associated with bidirectional energy flow. The power-to-load ratio is evaluated using pseudo-spectral control to determine the optimum PTO torque based on a multiterm objective function. The pseudo-spectral optimal control problem is extended to include load metrics in the objective function, which may now consist of competing terms. Separate penalty weights are attached to the surge-foundation force and PTO control torque to tune the optimizer performance to emphasize either power absorption or load shedding. PTO efficiency is not included in the objective function, but the penalty weights are utilized to limit the force and torque amplitudes, thereby reducing losses associated with bidirectional energy flow. Results from pseudo-spectral control demonstrate that shedding a portion of the available wave energy can provide greater reductions in structural loads and reactive power.
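    A hedged sketch of such a multiterm objective at a single regular-wave frequency: maximize mean absorbed power while penalizing the PTO torque amplitude. All numerical values (inertia, damping, excitation, penalty weight) are illustrative assumptions, not the device parameters of the paper, and the full multi-frequency pseudo-spectral formulation is not reproduced.

```python
# Single-frequency power-versus-load trade-off with a penalized objective (sketch).
import numpy as np
from scipy.optimize import minimize

w = 0.8                                   # wave frequency [rad/s]
I, A, B, C = 1.0e6, 4.0e5, 2.0e5, 9.0e5   # pitch inertia, added inertia, damping, stiffness
Xw = 5.0e5                                # wave excitation torque amplitude [N m]
scale = 1.0e5                             # optimize torque in units of 100 kN m

def velocity(T):                          # complex pitch velocity for PTO torque T
    theta = (Xw + T) / (-w ** 2 * (I + A) + 1j * w * B + C)
    return 1j * w * theta

def objective(x, w_torque=0.3):
    T = scale * (x[0] + 1j * x[1])
    power = -0.5 * np.real(T * np.conj(velocity(T)))     # mean absorbed power [W]
    return -power + w_torque * abs(T)                    # competing terms

res = minimize(objective, x0=[0.0, 0.0], method="Powell")
T_opt = scale * (res.x[0] + 1j * res.x[1])
print("optimal PTO torque amplitude [N m]:", abs(T_opt))
print("mean absorbed power [W]:", -0.5 * np.real(T_opt * np.conj(velocity(T_opt))))
```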

  9. Analysis of filament statistics in fast camera data on MAST

    NASA Astrophysics Data System (ADS)

    Farley, Tom; Militello, Fulvio; Walkden, Nick; Harrison, James; Silburn, Scott; Bradley, James

    2017-10-01

    Coherent filamentary structures have been shown to play a dominant role in turbulent cross-field particle transport [D'Ippolito 2011]. An improved understanding of filaments is vital in order to control scrape off layer (SOL) density profiles and thus control first wall erosion, impurity flushing and coupling of radio frequency heating in future devices. The Elzar code [T. Farley, 2017 in prep.] is applied to MAST data. The code uses information about the magnetic equilibrium to calculate the intensity of light emission along field lines as seen in the camera images, as a function of the field lines' radial and toroidal locations at the mid-plane. In this way a `pseudo-inversion' of the intensity profiles in the camera images is achieved from which filaments can be identified and measured. In this work, a statistical analysis of the intensity fluctuations along field lines in the camera field of view is performed using techniques similar to those typically applied in standard Langmuir probe analyses. These filament statistics are interpreted in terms of the theoretical ergodic framework presented by F. Militello & J.T. Omotani, 2016, in order to better understand how time averaged filament dynamics produce the more familiar SOL density profiles. This work has received funding from the RCUK Energy programme (Grant Number EP/P012450/1), from Euratom (Grant Agreement No. 633053) and from the EUROfusion consortium.

  10. Glyburide is associated with attenuated vasogenic edema in stroke patients

    PubMed Central

    Kimberly, W. Taylor; Battey, Thomas W. K.; Pham, Ly; Wu, Ona; Yoo, Albert J.; Furie, Karen L.; Singhal, Aneesh B.; Elm, Jordan J.; Stern, Barney J.; Sheth, Kevin N.

    2016-01-01

    Background and Purpose Brain edema is a serious complication of ischemic stroke that can lead to secondary neurological deterioration and death. Glyburide is reported to prevent brain swelling in preclinical rodent models of ischemic stroke through inhibition of a non-selective channel composed of sulfonylurea receptor 1 (SUR1) and transient receptor potential cation channel subfamily M member 4 (TRPM4). However, the relevance of this pathway to the development of cerebral edema in stroke patients is not known. Methods Using a case control design, we retrospectively assessed neuroimaging and blood markers of cytotoxic and vasogenic edema in subjects who were enrolled in the Glyburide Advantage in Malignant Edema and Stroke-Pilot (GAMES-Pilot) trial. We compared serial brain magnetic resonance images (MRIs) to a cohort with similar large volume infarctions. We also compared matrix metalloproteinase-9 plasma level in large hemispheric stroke. Results We report that IV glyburide was associated with attenuated T2 fluid attenuated inversion recovery (FLAIR) signal intensity ratio on brain MRI, diminished the lesional water diffusivity between days 1 and 2 (pseudo-normalization), and reduced blood matrix metalloproteinase-9 (MMP-9) level. Conclusions Several surrogate markers of vasogenic edema appear to be reduced in the setting of IV glyburide treatment in human stroke. Verification of these potential imaging and blood biomarkers is warranted in the context of a randomized, placebo-controlled trial. PMID:24072459

  11. VASP- VARIABLE DIMENSION AUTOMATIC SYNTHESIS PROGRAM

    NASA Technical Reports Server (NTRS)

    White, J. S.

    1994-01-01

    VASP is a variable dimension Fortran version of the Automatic Synthesis Program, ASP. The program is used to implement Kalman filtering and control theory. Basically, it consists of 31 subprograms for solving most modern control problems in linear, time-variant (or time-invariant) control systems. These subprograms include operations of matrix algebra, computation of the exponential of a matrix and its convolution integral, and the solution of the matrix Riccati equation. The user calls these subprograms by means of a FORTRAN main program, and so can easily obtain solutions to most general problems of extremization of a quadratic functional of the state of the linear dynamical system. Particularly, these problems include the synthesis of the Kalman filter gains and the optimal feedback gains for minimization of a quadratic performance index. VASP, as an outgrowth of the Automatic Synthesis Program, has the following improvements: more versatile programming language; more convenient input/output format; some new subprograms which consolidate certain groups of statements that are often repeated; and variable dimensioning. The pertinent difference between the two programs is that VASP has variable dimensioning and more efficient storage. The documentation for the VASP program contains a VASP dictionary and example problems. The dictionary contains a description of each subroutine and instructions on its use. The example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and a pseudo-inverse of a matrix. This program is written in FORTRAN IV and has been implemented on the IBM 360. The VASP program was developed in 1971.
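    Two of the operations the VASP subprograms provide, the pseudo-inverse of a matrix and the optimal feedback gain from the sampled-data matrix Riccati equation, have direct modern equivalents. The sketch below uses NumPy/SciPy instead of the original FORTRAN IV routines, with an assumed discrete-time double-integrator plant as the example system.

```python
# Modern stand-ins for two VASP operations (illustrative, not the library itself).
import numpy as np
from scipy.linalg import solve_discrete_are

# Moore-Penrose pseudo-inverse of a rectangular matrix
M = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
M_pinv = np.linalg.pinv(M)
print("pseudo-inverse property M M+ M = M:", np.allclose(M @ M_pinv @ M, M))

# Optimal feedback gain from the sampled-data (discrete) matrix Riccati equation
A = np.array([[1.0, 0.1], [0.0, 1.0]])        # assumed double-integrator plant
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K x minimizes the quadratic index
print("optimal feedback gain K:", K)
```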

  12. 2.5D complex resistivity modeling and inversion using unstructured grids

    NASA Astrophysics Data System (ADS)

    Xu, Kaijun; Sun, Jie

    2016-04-01

    The complex resistivity of rocks and ores has been recognized for a long time. The Cole-Cole model (CCM) is generally used to describe complex resistivity, and it has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from CCM parameters such as direct resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). It is therefore important to obtain the complex parameters of a geologic body. Because it is difficult to approximate complex structures and terrain using a traditional rectangular grid, and in order to enhance the numerical accuracy and soundness of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of the frequency-domain 2.5D complex resistivity and implement a conjugate gradient algorithm for the inversion of 2.5D complex resistivity. The adaptive finite element method is applied to solve the 2.5D complex resistivity forward problem for a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex-resistivity electromagnetic fields. Next, a pseudo delta function is used to distribute the electric dipole source. The electromagnetic fields can then be expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by inhomogeneities with anomalous conductivity. Finally, we calculate the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented with the conjugate gradient algorithm, which does not need to form the sensitivity matrix explicitly but only computes the product of the sensitivity matrix, or its transpose, with a vector. In addition, the inversion target zones are discretized with fine grids and the background zones with coarse grids, which reduces the number of inversion cells and helps improve computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm, and the theoretical calculations indicate that modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Unstructured grids improve the modeling accuracy, but inversion with a large number of grid cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We acknowledge the support of the National Natural Science Foundation of China (41304094).
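    The Cole-Cole model referred to above has a simple closed form; the sketch below just tabulates the complex resistivity amplitude and phase over frequency. The parameter values are arbitrary illustrative choices, not values from the paper.

```python
# Cole-Cole complex resistivity (illustrative parameter values).
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Complex resistivity rho(omega) = rho0 * (1 - m * (1 - 1/(1 + (i omega tau)^c)))."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

freqs = np.logspace(-2, 4, 7)                       # Hz
rho = cole_cole(2.0 * np.pi * freqs, rho0=100.0, m=0.3, tau=0.01, c=0.5)
for f, r in zip(freqs, rho):
    print(f"{f:10.2f} Hz   |rho| = {abs(r):8.3f} ohm m   phase = {np.angle(r, deg=True):7.3f} deg")
```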

  13. Effect of bioaugmentation and biostimulation on sulfate-reducing column startup captured by functional gene profiling.

    PubMed

    Pereyra, Luciana P; Hiibel, Sage R; Perrault, Elizabeth M; Reardon, Kenneth F; Pruden, Amy

    2012-10-01

    Sulfate-reducing permeable reactive zones (SR-PRZs) depend upon a complex microbial community to utilize a lignocellulosic substrate and produce sulfides, which remediate mine drainage by binding heavy metals. To gain insight into the impact of the microbial community composition on the startup time and pseudo-steady-state performance, functional genes corresponding to cellulose-degrading (CD), fermentative, sulfate-reducing, and methanogenic microorganisms were characterized in columns simulating SR-PRZs using quantitative polymerase chain reaction (qPCR) and denaturing gradient gel electrophoresis (DGGE). Duplicate columns were bioaugmented with sulfate-reducing or CD bacteria or biostimulated with ethanol or carboxymethyl cellulose and compared with baseline dairy manure inoculum and uninoculated controls. Sulfate removal began after ~ 15 days for all columns and pseudo-steady state was achieved by Day 30. Despite similar performance, DGGE profiles of 16S rRNA gene and functional genes at pseudo-steady state were distinct among the column treatments, suggesting the potential to control ultimate microbial community composition via bioaugmentation and biostimulation. qPCR revealed enrichment of functional genes in all columns between the initial and pseudo-steady-state time points. This is the first functional gene-based study of CD, fermentative and sulfate-reducing bacteria and methanogenic archaea in a lignocellulose-based environment and provides new qualitative and quantitative insight into startup of a complex microbial system. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  14. Underdetermined blind separation of three-way fluorescence spectra of PAHs in water

    NASA Astrophysics Data System (ADS)

    Yang, Ruifang; Zhao, Nanjing; Xiao, Xue; Zhu, Wei; Chen, Yunan; Yin, Gaofang; Liu, Jianguo; Liu, Wenqing

    2018-06-01

    In this work, an underdetermined blind decomposition method is developed to recognize individual components in the three-way fluorescence spectra of mixtures by using sparse component analysis (SCA). The mixing matrix is estimated from the mixtures using a fuzzy data clustering algorithm together with the scatter points corresponding to local energy maxima in the time-frequency domain, and the spectra of the target components are recovered by a pseudo-inverse technique. As an example, three and four pure-component spectra are blindly extracted from two mixture samples with this method, with similarities between the resolved and reference spectra all above 0.80. This work opens a new and effective path toward monitoring PAHs in water by three-way fluorescence spectroscopy.
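    The final recovery step described above reduces to a pseudo-inverse applied to the mixture data once the mixing matrix is known. The sketch below assumes the mixing matrix has already been estimated (the clustering step is not reproduced) and uses synthetic Gaussian "spectra"; in the underdetermined case the pseudo-inverse returns the minimum-norm estimate, i.e. the projection of the true sources onto the row space of the mixing matrix.

```python
# Pseudo-inverse recovery step of an underdetermined mixture (illustrative).
import numpy as np

x = np.linspace(0.0, 1.0, 200)
S_true = np.vstack([np.exp(-((x - c) / 0.05) ** 2) for c in (0.3, 0.5, 0.7)])
A = np.array([[1.0, 0.6, 0.2],
              [0.3, 0.9, 0.8]])       # 2 mixtures, 3 sources: underdetermined
X = A @ S_true                        # observed mixture spectra

S_rec = np.linalg.pinv(A) @ X         # minimum-norm recovery (projection onto row space of A)
for i in range(3):
    sim = np.corrcoef(S_rec[i], S_true[i])[0, 1]
    print(f"correlation of recovered component {i} with reference: {sim:.2f}")
```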

  15. Antinociceptive and Anti-inflammatory Effects of Triterpenes from Pluchea quitoc DC. Aerial Parts

    PubMed Central

    Nobre da Silva, Francisco Alcione; de Farias Freire, Sônia Maria; da Rocha Borges, Marilene Oliveira; Vidal Barros, Francisco Erivaldo; da de Sousa, Maria; de Sousa Ribeiro, Maria Nilce; Pinheiro Guilhon, Giselle Maria Skelding; Müller, Adolfo Henrique; Romão Borges, Antonio Carlos

    2017-01-01

    Background: Pluchea quitoc DC. (Asteraceae), a medicinal plant known as “quitoco,” “caculucage,” “tabacarana” and “madre-cravo,” is indicated for inflammatory conditions such as bronchitis, arthritis, and inflammation in the uterus and digestive system. Objective: This study evaluated the analgesic and anti-inflammatory activities of the triterpenes compounds obtained from P. quitoc aerial parts. Materials and Methods: The triterpenes compounds β-amyrin, taraxasterol and pseudo-taraxasterol in a mixture (T); β-amyrin, taraxasterol and pseudo-taraxasterol acetates in a mixture (Ta); β-amyrin, taraxasterol, pseudo-taraxasterol acetates in a mixture with β-amyrin, taraxasterol and pseudo-taraxasterol myristates (Tafe) were analyzed in the models of nociception and inflammation. The evaluation of antinociceptive activity was carried out by the acetic acid-induced writhing and tail-flick tests while leukocyte migration to the peritoneal cavity was used for anti-inflammatory profile. Results: The oral administration of T or Tafe (40 mg/kg and 70 mg/kg) and Ta (70 mg/kg) to mice reduced acetic acid-induced writhing. The tail-flick response of mice was not affected by T or Tafe (40 mg/kg). T or Tafe (40 mg/kg) and Ta (70 mg/kg) also inhibited peritoneal leukocyte infiltration following the injection of carrageenan. Conclusion: The results demonstrate the anti-inflammatory and peripheral antinociceptive activity of the triterpenes β-amyrin, taraxasterol, and pseudo-taraxasterol that were decreased when these were acetylated; while the acetylated triterpenes in mixture with myristyloxy triterpenes improved this activity. These compounds seem, at least in part, to be related to the plant’s reported activity. SUMMARY The mixtures of hydroxylated, acetylated, and myristate triterpenes isolated from hexanic extracts of Pluchea quitoc DC. were analyzed in the models of nociception and inflammation in mice. The results demonstrate the anti-inflammatory and peripheral antinociceptive activity of the triterpenes β-amyrin, taraxasterol, and pseudo-taraxasterol. This study showed too that the activity of triterpenes may be decreased by their being acetylated, while the acetylated triterpenes in mixture with myristate triterpenes improved this activity. Abbreviations Used: T: Triterpenes compounds β-amyrin, taraxasterol, and pseudo-taraxasterol in a mixture, Ta: Triterpenes compounds β-amyrin, taraxasterol and pseudo-taraxasterol acetates in a mixture, Tafe: Triterpenes compounds β-amyrin, taraxasterol, pseudo-taraxasterol acetates in a mixture with β-amyrin, taraxasterol and pseudo-taraxasterol myristates, Ctrl: Control, Indo: Indomethacin, Dexa: Dexamethasone, EtOAc: Ethyl acetate, MeOH: Methanol. PMID:29333034

  16. Searching for quantum optimal controls under severe constraints

    DOE PAGES

    Riviello, Gregory; Tibbetts, Katharine Moore; Brif, Constantin; ...

    2015-04-06

    The success of quantum optimal control for both experimental and theoretical objectives is connected to the topology of the corresponding control landscapes, which are free from local traps if three conditions are met: (1) the quantum system is controllable, (2) the Jacobian of the map from the control field to the evolution operator is of full rank, and (3) there are no constraints on the control field. This paper investigates how the violation of assumption (3) affects gradient searches for globally optimal control fields. The satisfaction of assumptions (1) and (2) ensures that the control landscape lacks fundamental traps, but certain control constraints can still prevent successful optimization of the objective. Using optimal control simulations, we show that the most severe field constraints are those that limit essential control resources, such as the number of control variables, the control duration, and the field strength. Proper management of these resources is an issue of great practical importance for optimization in the laboratory. For each resource, we show that constraints exceeding quantifiable limits can introduce artificial traps to the control landscape and prevent gradient searches from reaching a globally optimal solution. These results demonstrate that careful choice of relevant control parameters helps to eliminate artificial traps and facilitate successful optimization.

  17. Transect-scale imaging of root zone electrical conductivity by inversion of multiple-height EMI measurements under different salinity conditions

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Coppola, Antonio; Dragonetti, Giovanna; Comegna, Alessandro; Rodriguez, Giuseppe; Vignoli, Giulio

    2017-04-01

    The ability to determine the effects of salts on soils and plants is of great importance to agriculture. To control its harmful effects, soil salinity needs to be monitored in space and time. This requires knowledge of its magnitude, temporal dynamics, and spatial variability. Soil salinity can be evaluated by measuring the bulk electrical conductivity (σb) in the field. Measurements of σb can be made with either in situ or remote devices (Rhoades and Oster, 1986; Rhoades and Corwin, 1990; Rhoades and Miyamoto, 1990). Time Domain Reflectometry (TDR) sensors allow simultaneous measurements of water content, θ, and σb. They may be calibrated in the laboratory for estimating the electrical conductivity of the soil solution (σw). However, they have a relatively small observation volume and thus only provide local-scale measurements. The spatial range of the sensors is limited to tens of centimeters, and extension of the information to a large area can be problematic. Also, information on the vertical distribution of the σb soil profile may only be obtained by installing sensors at different depths. In this sense, TDR may be considered an invasive technique. Compared to TDR, non-invasive electromagnetic induction (EMI) techniques can be used for extensively mapping the bulk electrical conductivity in the field. The problem is that all these techniques give depth-weighted apparent electrical conductivity (ECa) measurements, depending on the specific depth distribution of σb as well as on the depth response function of the sensor used. In order to deduce the actual distribution of local σb in the soil profile, one may invert the signal coming from EMI sensors. Most studies use the linear model proposed by McNeill (1980), describing the relative depth response of the ground conductivity meter. Using the forward linear model of McNeill, Borchers et al. (1997) implemented a least-squares inverse procedure with second-order Tikhonov regularization to estimate the vertical σb distribution from EMI field data. More recent studies (Hendrickx et al., 2002; Deidda et al., 2003; Deidda et al., 2014, among others) extended the approach to a more complicated nonlinear model of the response of a ground conductivity meter to changes of σb with depth. Notably, these inverse procedures are based only on electromagnetic physics. Thus, they rely only on ECa readings, possibly taken with both the horizontal and vertical configurations and with the sensor at different heights above the ground, and do not require any further field calibration. Nevertheless, as discussed by Hendrickx et al. (2002), important issues with inverse approaches concern: i) the applicability to heterogeneous field soils of physical equations originally developed for the electromagnetic response of homogeneous media, and ii) non-uniqueness and instability problems inherent to inverse procedures, even after Tikhonov regularization. Besides, as discussed by Cook and Walker (1992), these mathematical inversion procedures using layered-earth models were originally designed for interpreting porous systems with distinct layering; where subsurface layers are not sharply defined, this type of inversion may be subject to considerable error. With these premises, the main aim of this study is to estimate the vertical σb distribution from ECa measured using ground-surface EMI methods under different salinity conditions, using TDR data as ground truth for validation of the inversion procedure. The latter is based on a regularized 1D inversion procedure designed to swiftly manage nonlinear multiple EMI-depth responses (Deidda et al., 2014). It couples the damped Gauss-Newton method with either the truncated singular value decomposition (TSVD) or the truncated generalized singular value decomposition (TGSVD), and it implements an explicit (exact) representation of the Jacobian to solve the nonlinear inverse problem. The experimental field (30 m × 15.6 m, for a total area of 468 m2) was divided into three transects, 30 m long and 4.2 m wide, cultivated with green bean and irrigated with three different salinity levels (1 dS/m, 3 dS/m, and 6 dS/m). Each transect consisted of seven rows equipped with a sprinkler irrigation system, which supplied a water flux of 2 l/h. As for the salt application, CaCl2 was dissolved in tap water and subsequently siphoned into the irrigation system. For each transect, 24 regularly spaced monitoring sites (1 m apart) were selected for soil measurements, using different instruments: i) a TDR100; ii) a Geonics EM-38; iii). Overall, fifteen measurement campaigns were carried out.
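    The linear variant of this workflow (McNeill's cumulative response functions plus second-order Tikhonov regularization, in the spirit of Borchers et al., 1997) fits in a short script. The layering, instrument heights, noise level and regularization parameter below are illustrative assumptions, and the nonlinear TSVD/TGSVD damped Gauss-Newton scheme of Deidda et al. (2014) is not reproduced.

```python
# Linear McNeill-model inversion with second-order Tikhonov regularization (sketch).
import numpy as np

s = 1.0                                        # intercoil spacing [m]
z = np.linspace(0.0, 3.0, 31)                  # layer interfaces [m] (deep half-space neglected)
zc = 0.5 * (z[:-1] + z[1:])
sigma_true = 20.0 + 60.0 * np.exp(-((zc - 1.2) / 0.4) ** 2)   # mS/m

RV = lambda u: 1.0 / np.sqrt(4.0 * u ** 2 + 1.0)              # cumulative response, vertical dipoles
RH = lambda u: np.sqrt(4.0 * u ** 2 + 1.0) - 2.0 * u          # cumulative response, horizontal dipoles

def layer_kernel(R, height):
    top, bot = (z[:-1] + height) / s, (z[1:] + height) / s
    return R(top) - R(bot)                     # contribution of each layer to ECa

heights = [0.0, 0.5, 1.0, 1.5]                 # sensor heights above ground [m]
G = np.vstack([layer_kernel(R, h) for h in heights for R in (RV, RH)])
d = G @ sigma_true + np.random.default_rng(1).normal(0.0, 0.3, G.shape[0])

# minimize ||G m - d||^2 + lam^2 ||L m||^2 with a second-difference operator L
L = np.diff(np.eye(sigma_true.size), 2, axis=0)
lam = 0.05
m = np.linalg.lstsq(np.vstack([G, lam * L]),
                    np.concatenate([d, np.zeros(L.shape[0])]), rcond=None)[0]
print("rms reconstruction error [mS/m]:", np.sqrt(np.mean((m - sigma_true) ** 2)))
```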

  18. Auto-calibrated scanning-angle prism-type total internal reflection microscopy for nanometer-precision axial position determination and optional variable-illumination-depth pseudo total internal reflection microscopy

    DOEpatents

    Fang, Ning; Sun, Wei

    2015-04-21

    A method, apparatus, and system for improved VA-TIRFM microscopy. The method comprises automatically controlled calibration of one or more laser sources by precise control of presentation of each laser relative a sample for small incremental changes of incident angle over a range of critical TIR angles. The calibration then allows precise scanning of the sample for any of those calibrated angles for higher and more accurate resolution, and better reconstruction of the scans for super resolution reconstruction of the sample. Optionally the system can be controlled for incident angles of the excitation laser at sub-critical angles for pseudo TIRFM. Optionally both above-critical angle and sub critical angle measurements can be accomplished with the same system.

  19. Wavefront Control and Image Restoration with Less Computing

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2010-01-01

    PseudoDiversity is a method of recovering the wavefront in a sparse- or segmented- aperture optical system typified by an interferometer or a telescope equipped with an adaptive primary mirror consisting of controllably slightly moveable segments. (PseudoDiversity should not be confused with a radio-antenna-arraying method called pseudodiversity.) As in the cases of other wavefront- recovery methods, the streams of wavefront data generated by means of PseudoDiversity are used as feedback signals for controlling electromechanical actuators of the various segments so as to correct wavefront errors and thereby, for example, obtain a clearer, steadier image of a distant object in the presence of atmospheric turbulence. There are numerous potential applications in astronomy, remote sensing from aircraft and spacecraft, targeting missiles, sighting military targets, and medical imaging (including microscopy) through such intervening media as cells or water. In comparison with prior wavefront-recovery methods used in adaptive optics, PseudoDiversity involves considerably simpler equipment and procedures and less computation. For PseudoDiversity, there is no need to install separate metrological equipment or to use any optomechanical components beyond those that are already parts of the optical system to which the method is applied. In Pseudo- Diversity, the actuators of a subset of the segments or subapertures are driven to make the segments dither in the piston, tilt, and tip degrees of freedom. Each aperture is dithered at a unique frequency at an amplitude of a half wavelength of light. During the dithering, images on the focal plane are detected and digitized at a rate of at least four samples per dither period. In the processing of the image samples, the use of different dither frequencies makes it possible to determine the separate effects of the various dithered segments or apertures. The digitized image-detector outputs are processed in the spatial-frequency (Fourier-transform) domain to obtain measures of the piston, tip, and tilt errors over each segment or subaperture. Once these measures are known, they are fed back to the actuators to correct the errors. In addition, measures of errors that remain after correction by use of the actuators are further utilized in an algorithm in which the image is phase-corrected in the spatial-frequency domain and then transformed back to the spatial domain at each time step and summed with the images from all previous time steps to obtain a final image having a greater signal-to-noise ratio (and, hence, a visual quality) higher than would otherwise be attainable.

  20. Adaptive Inverse Control for Rotorcraft Vibration Reduction

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1985-01-01

    This thesis extends the Least Mean Square (LMS) algorithm to solve the multiple-input, multiple-output problem of alleviating N/Rev (revolutions per minute by number of blades) helicopter fuselage vibration by means of adaptive inverse control. A frequency domain locally linear model is used to represent the transfer matrix relating the higher harmonic pitch control inputs to the harmonic vibration outputs to be controlled. By using the inverse matrix as the controller gain matrix, an adaptive inverse regulator is formed to alleviate the N/Rev vibration. The stability and rate of convergence properties of the extended LMS algorithm are discussed. It is shown that the stability ranges for the elements of the stability gain matrix are directly related to the eigenvalues of the vibration signal information matrix for the learning phase, but not for the control phase. The overall conclusion is that the LMS adaptive inverse control method can form a robust vibration control system, but will require some tuning of the input sensor gains, the stability gain matrix, and the amount of control relaxation to be used. The learning curve of the controller during the learning phase is shown to be quantitatively close to that predicted by averaging the learning curves of the normal modes. For higher order transfer matrices, a rough estimate of the inverse is needed to start the algorithm efficiently. The simulation results indicate that the factor which most influences LMS adaptive inverse control is the product of the control relaxation and the stability gain matrix. A small stability gain matrix makes the controller less sensitive to relaxation selection, and permits faster and more stable vibration reduction, than choosing the stability gain matrix large and the control relaxation term small. It is shown that the best selections of the stability gain matrix elements and the amount of control relaxation are basically a compromise between slow, stable convergence and fast convergence with increased possibility of unstable identification. In the simulation studies, the LMS adaptive inverse control algorithm is shown to be capable of adapting the inverse (controller) matrix to track changes in the flight conditions. The algorithm converges quickly for moderate disturbances, while taking longer for larger disturbances. Perfect knowledge of the inverse matrix is not required for good control of the N/Rev vibration. However, it is shown that measurement noise will prevent the LMS adaptive inverse control technique from controlling the vibration, unless the signal averaging method presented is incorporated into the algorithm.
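    The control structure described here (a quasi-static linear map from harmonic control inputs to harmonic vibration outputs, an LMS-style identification update, and an inverse-model control step with a relaxation factor) can be sketched as follows. The 2x2 transfer matrix, noise level, gain and relaxation value are arbitrary illustrative choices rather than the thesis' rotor model, and the update shown identifies the forward matrix and inverts it rather than adapting the inverse directly.

```python
# Quasi-static adaptive inverse vibration control sketch (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
T_true = np.array([[1.0, 0.4], [-0.3, 0.8]])       # "plant" transfer matrix
d = np.array([2.0, -1.5])                          # uncontrolled harmonic vibration

T_hat = np.eye(2)                                  # rough initial model
u = np.zeros(2)
mu, relax = 0.4, 0.6                               # identification gain, control relaxation

z_prev, u_prev = None, None
for _ in range(40):
    z = T_true @ u + d + 0.01 * rng.normal(size=2)          # measured vibration
    if z_prev is not None:
        du, dz = u - u_prev, z - z_prev
        if du @ du > 1e-12:                                  # normalized LMS-style update
            T_hat += mu * np.outer(dz - T_hat @ du, du) / (du @ du)
    z_prev, u_prev = z, u.copy()
    u = u - relax * np.linalg.pinv(T_hat) @ z                # inverse-model control step
print("residual vibration amplitude:", np.linalg.norm(T_true @ u + d))
```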

  1. 3-D orbital evolution model of outer asteroid belt

    NASA Technical Reports Server (NTRS)

    Solovaya, Nina A.; Gerasimov, Igor A.; Pittich, Eduard M.

    1992-01-01

    The evolution of minor planets in the outer part of the asteroid belt is considered. In the framework of the semi-averaged elliptic restricted three-dimensional three-body model, the boundary of regions of the Hill's stability is found. As was shown in our work, the Jacobian integral exists.

  2. A General Pressure Gradient Formulation for Ocean Models, Part 1: Scheme Design and Diagnostic Analysis, Part II: Energy, Momentum, and Bottom Torque Consistency

    NASA Technical Reports Server (NTRS)

    Song, Y. T.

    1998-01-01

    A Jacobian formulation of the pressure gradient force for use in models with topography following coordinates is proposed. It can be used in conjunction with any vertical coordinate system and is easily implemented.

  3. Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.

    PubMed

    Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2016-01-01

    This paper proposes a pseudo-haptic feedback method conveying simulated soft surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft surface deformation and control of the indenter avatar speed, to convey stiffness information of a simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended creating a multi-point pseudo-haptic feedback system. A comparison with regards to (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard nodule detection experiments to a tablet computer incorporating vibration feedback was conducted. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction. It is noted that multi-point pseudo-haptic feedback performs similarly well when compared to a vibration-based feedback method based on both performance measures elapsed time and nodule detection sensitivity. This proves that the proposed method can be used to convey detailed haptic information for virtual environmental tasks, even subtle ones, using either a computer mouse or a pressure sensitive device as an input device. This pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as, for example, occurring in ever more realistic video games with increasing emphasis on interaction with the physical environment and minimally invasive surgery in the form of soft tissue organs with embedded cancer nodules. Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, either using desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.

  4. A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement

    NASA Astrophysics Data System (ADS)

    Koner, P.; Battaglia, A.; Simmer, C.

    2009-04-01

    The retrieval of rain rate from attenuating radar (e.g., the Cloud Profiling Radar on board CloudSat, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need for additional information (like path-integrated attenuations (PIA) derived from surface reference techniques, or the precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. Based on optimal estimation theory, it is generally argued that in a case of visible attenuation there is no solution without constraining the problem, because there is not enough information content to solve it. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the obvious question: is all the information enclosed in this additional measurement? This also contradicts information theory, because one measurement can introduce only one degree of freedom in the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be explained using the estimation and information theories of the OEM. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, where the a priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a priori and error covariance matrices. The regularization is required to reduce the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces into the state space in an ill-posed inversion. In this work, the above-mentioned question will be discussed based on regularization theory, error mitigation and eigenvalue mathematics. References: 1. L'Ecuyer TS and Stephens G. An estimation based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.

  5. Development of the WRF-CO2 4D-Var assimilation system v1.0

    NASA Astrophysics Data System (ADS)

    Zheng, Tao; French, Nancy H. F.; Baxter, Martin

    2018-05-01

    Regional atmospheric CO2 inversions commonly use Lagrangian particle trajectory model simulations to calculate the required influence function, which quantifies the sensitivity of a receptor to flux sources. In this paper, an adjoint-based four-dimensional variational (4D-Var) assimilation system, WRF-CO2 4D-Var, is developed to provide an alternative approach. This system is developed based on the Weather Research and Forecasting (WRF) modeling system, including the system coupled to chemistry (WRF-Chem), with tangent linear and adjoint codes (WRFPLUS), and with data assimilation (WRFDA), all in version 3.6. In WRF-CO2 4D-Var, CO2 is modeled as a tracer and its feedback to meteorology is ignored. This configuration allows most WRF physical parameterizations to be used in the assimilation system without incurring a large amount of code development. WRF-CO2 4D-Var solves for the optimized CO2 flux scaling factors in a Bayesian framework. Two variational optimization schemes are implemented for the system: the first uses the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization algorithm (L-BFGS-B) and the second uses the Lanczos conjugate gradient (CG) in an incremental approach. WRFPLUS forward, tangent linear, and adjoint models are modified to include the physical and dynamical processes involved in the atmospheric transport of CO2. The system is tested by simulations over a domain covering the continental United States at 48 km × 48 km grid spacing. The accuracy of the tangent linear and adjoint models is assessed by comparing against finite difference sensitivity. The system's effectiveness for CO2 inverse modeling is tested using pseudo-observation data. The results of the sensitivity and inverse modeling tests demonstrate the potential usefulness of WRF-CO2 4D-Var for regional CO2 inversions.
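    The variational core of such a system (minimizing a Bayesian cost function over flux scaling factors with L-BFGS-B and an adjoint-based gradient) can be illustrated with a toy linear "transport" operator standing in for WRF-CO2, whose adjoint is then simply its transpose. Dimensions, error statistics and the pseudo-observations below are assumptions for illustration.

```python
# Toy 4D-Var flux-scaling-factor optimization with L-BFGS-B (illustrative).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_flux, n_obs = 20, 200
H = rng.normal(size=(n_obs, n_flux))            # stand-in transport/observation operator
s_true = 1.0 + 0.3 * rng.normal(size=n_flux)    # "true" scaling factors
y = H @ s_true + 0.1 * rng.normal(size=n_obs)   # pseudo-observations

s_b = np.ones(n_flux)                           # prior scaling factors
B_inv = np.eye(n_flux) / 0.5 ** 2               # inverse prior error covariance
R_inv = np.eye(n_obs) / 0.1 ** 2                # inverse observation error covariance

def cost_and_grad(s):
    ds, r = s - s_b, H @ s - y
    J = 0.5 * ds @ B_inv @ ds + 0.5 * r @ R_inv @ r
    grad = B_inv @ ds + H.T @ (R_inv @ r)       # adjoint-based gradient
    return J, grad

res = minimize(cost_and_grad, s_b, jac=True, method="L-BFGS-B")
print("prior rms error:    ", np.sqrt(np.mean((s_b - s_true) ** 2)))
print("posterior rms error:", np.sqrt(np.mean((res.x - s_true) ** 2)))
```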

  6. CXCR2 inverse agonism detected by arrestin redistribution.

    PubMed

    Kredel, Simone; Wolff, Michael; Wiedenmann, Jörg; Moepps, Barbara; Nienhaus, G Ulrich; Gierschik, Peter; Kistler, Barbara; Heilker, Ralf

    2009-10-01

    To study CXCR2 modulated arrestin redistribution, the authors employed arrestin as a fusion protein containing either the Aequorea victoria-derived enhanced green fluorescent protein (EGFP) or a recently developed mutant of eqFP611, a red fluorescent protein derived from Entacmaea quadricolor. This mutant, referred to as RFP611, had earlier been found to assume a dimeric quarternary structure. It was therefore employed in this work as a "tandem" (td) construct for pseudo-monomeric fusion protein labeling. Both arrestin fusion proteins, containing either td-RFP611 (Arr-td-RFP611) or enhanced green fluorescent protein (EGFP; Arr-EGFP), were found to colocalize with internalized fluorescently labeled Gro-alpha a few minutes after Gro-alpha addition. Intriguingly, however, Arr-td-RFP611 and Arr-EGFP displayed distinct cellular distribution patterns in the absence of any CXCR2-activating ligand. Under these conditions, Arr-td-RFP611 showed a largely homogeneous cytosolic distribution, whereas Arr-EGFP segregated, to a large degree, into granular spots. These observations indicate a higher sensitivity of Arr EGFP to the constitutive activity of CXCR2 and, accordingly, an increased arrestin redistribution to coated pits and endocytic vesicles. In support of this interpretation, the authors found the known CXCR2 antagonist Sch527123 to act as an inverse agonist with respect to Arr-EGFP redistribution. The inverse agonistic properties of Sch527123 were confirmed in vitro in a guanine nucleotide binding assay, revealing an IC(50) value similar to that observed for Arr-EGFP redistribution. Thus, the redistribution assay, when based on Arr-EGFP, enables the profiling of antagonistic test compounds with respect to inverse agonism. When based on Arr-td-RFP611, the assay may be employed to study CXCR2 agonism or neutral antagonism.

  7. Extended fault inversion with random slipmaps: a resolution test for the 2012 Mw 7.6 Nicoya, Costa Rica earthquake

    NASA Astrophysics Data System (ADS)

    López-Comino, José Ángel; Stich, Daniel; Ferreira, Ana M. G.; Morales, Jose

    2015-09-01

    Inversions for the full slip distribution of earthquakes provide detailed models of earthquake sources, but stability and non-uniqueness of the inversions are a major concern. The problem is underdetermined in any realistic setting, and significantly different slip distributions may translate to fairly similar seismograms. In such circumstances, inverting for a single best model may become overly dependent on the details of the procedure. Instead, we propose to perform extended fault inversion through falsification. We generate a representative set of heterogeneous slipmaps, compute their forward predictions, and falsify inappropriate trial models that do not reproduce the data within a reasonable level of mismodelling. The remainder of surviving trial models forms our set of coequal solutions. The solution set may contain only members with similar slip distributions, or else uncover some fundamental ambiguity such as, for example, different patterns of main slip patches. For a feasibility study, we use teleseismic body wave recordings from the 2012 September 5 Nicoya, Costa Rica earthquake, although the inversion strategy can be applied to any type of seismic, geodetic or tsunami data for which we can handle the forward problem. We generate 10 000 pseudo-random, heterogeneous slip distributions assuming a von Karman autocorrelation function, keeping the rake angle, rupture velocity and slip velocity function fixed. The slip distribution of the 2012 Nicoya earthquake turns out to be relatively well constrained from 50 teleseismic waveforms. Two hundred fifty-two slip models with normalized L1-fit within 5 per cent of the global minimum form our solution set. They consistently show a single dominant slip patch around the hypocentre. Uncertainties are related to the details of the slip maximum, including the amount of peak slip (2-3.5 m), as well as the characteristics of peripheral slip below 1 m. Synthetic tests suggest that slip patterns such as Nicoya may be a fortunate case, while it may be more difficult to unambiguously reconstruct more distributed slip from teleseismic data.
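    The falsification strategy can be mimicked end to end on a toy problem: draw pseudo-random slip maps with a von Karman-like autocorrelation by spectral synthesis, score each against the data with an L1 misfit, and keep every trial model within a few per cent of the best fit. A random linear operator stands in for the teleseismic forward problem, and the correlation lengths, Hurst exponent and 5 per cent threshold are illustrative choices, not the study's values.

```python
# Falsification with von Karman-correlated random slip maps (toy illustration).
import numpy as np

rng = np.random.default_rng(3)
nx, nz = 32, 16                                  # along-strike / down-dip cells
kx = np.fft.fftfreq(nx)[None, :]
kz = np.fft.fftfreq(nz)[:, None]
ax, az, hurst = 8.0, 4.0, 0.75                   # correlation lengths (cells), Hurst exponent
psd = (1.0 + (kx * ax) ** 2 + (kz * az) ** 2) ** (-(hurst + 1.0))

def random_slip():
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    field = np.real(np.fft.ifft2(np.sqrt(psd) * phase))
    field -= field.min()
    return field / field.sum()                   # normalize to unit total slip (moment proxy)

G = rng.normal(size=(60, nx * nz))               # stand-in linear forward operator
data = G @ random_slip().ravel()                 # "observed" waveform samples

trials = [random_slip() for _ in range(2000)]
misfit = np.array([np.abs(G @ m.ravel() - data).sum() for m in trials])
survivors = misfit <= 1.05 * misfit.min()        # falsify all models worse than 5 per cent
print(f"{survivors.sum()} of {len(trials)} trial slip maps survive falsification")
```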

  8. Kinematically Optimal Robust Control of Redundant Manipulators

    NASA Astrophysics Data System (ADS)

    Galicki, M.

    2017-12-01

    This work deals with the problem of robust optimal task-space trajectory tracking subject to finite-time convergence. Kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when tracking the trajectory with the end-effector. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task-space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free robust kinematically optimal controllers, based on an estimate of the transpose Jacobian, which seem to be effective in counteracting both uncertain kinematics and dynamics, unbounded disturbances and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. The numerical simulations, carried out for a redundant manipulator of SCARA type consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers and include comparisons with other well-known control schemes.
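    The transpose-Jacobian idea at the heart of these controllers can be shown at the kinematic level: joint velocities proportional to J(q)^T times the task-space error drive the end-effector toward the target without any matrix inversion, and hence without special treatment of singular configurations. The planar 3-R arm, link lengths and gain below are assumptions, and the robust terminal-sliding-mode terms of the paper are not reproduced.

```python
# Kinematic transpose-Jacobian tracking for a planar 3-R redundant arm (sketch).
import numpy as np

L = np.array([0.5, 0.4, 0.3])                    # assumed link lengths [m]

def fk(q):                                       # planar forward kinematics
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    a = np.cumsum(q)
    return np.array([[-np.sum(L[i:] * np.sin(a[i:])) for i in range(3)],
                     [ np.sum(L[i:] * np.cos(a[i:])) for i in range(3)]])

q = np.array([0.5, 1.0, -0.5])
x_des = np.array([0.6, 0.5])
alpha, dt = 8.0, 0.01                            # feedback gain, time step
for _ in range(2000):
    e = x_des - fk(q)
    q = q + dt * alpha * (jacobian(q).T @ e)     # q_dot = alpha * J^T * e, no inversion
print("final task-space error [m]:", np.linalg.norm(x_des - fk(q)))
```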

  9. Robust Task Space Trajectory Tracking Control of Robotic Manipulators

    NASA Astrophysics Data System (ADS)

    Galicki, M.

    2016-08-01

    This work deals with the problem of accurate task-space trajectory tracking subject to finite-time convergence. Kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when tracking the trajectory with the end-effector. Furthermore, the movement is to be accomplished in such a way as to reduce both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task-space non-singular terminal sliding vector variable and the Lyapunov stability theory, we propose a class of chattering-free robust controllers, based on an estimate of the transpose Jacobian, which seem to be effective in counteracting both uncertain kinematics and dynamics, unbounded disturbances and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. The numerical simulations, carried out for a redundant manipulator of SCARA type consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers and include comparisons with other well-known control schemes.

  10. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and the future evolution of CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly converts into flux changes when assuming perfect transport in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated into a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in TRANSCOM-CH4 experiment (Patra el al, 2011) provide synthetic measurements data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive them) used to create the pseudo-observations vary among inversions. Consequently, the comparisons of the nine inverted methane fluxes obtained for 2005 give a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at global scale, representing 5% of the total methane emissions for 2005. At continental scale, transport and modelling errors have bigger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasian, with a percentage range from 23% to 48%. Thus, contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of statistics errors included in our inversions have been computed. It shows that errors contained in measurement errors covariance matrix are under-estimated in current inversions, suggesting to include more properly transport and modelling errors in future inversions.

  11. Cooperative pulses for pseudo-pure state preparation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Daxiu; Chang, Yan; Yang, Xiaodong, E-mail: steffen.glaser@tum.de, E-mail: xiaodong.yang@sibet.ac.cn

    2014-06-16

    Using an extended version of the optimal-control-based gradient ascent pulse engineering algorithm, cooperative (COOP) pulses are designed for multi-scan experiments to prepare pseudo-pure states in quantum computation. COOP pulses can cancel undesired signal contributions, complementing and generalizing phase cycles. They also provide more flexibility and, in particular, eliminate the need to select specific individual target states, achieving fidelities at the theoretical limit by flexibly choosing an appropriate number of scans and pulse durations. The COOP approach is experimentally demonstrated for three-qubit and four-qubit systems.

  12. Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao

    2015-07-01

    Parallel robots are widely used in academia and industry. In spite of the numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts are directed towards asymmetric 3-DOF parallel robots whose end-effector can realize 2-translational and 1-rotational (2T1R) motion. In order to develop a manipulator with the capability of full circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of this proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that this mechanism attains an annular workspace through its full circular rotation and two-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which maps the error sources of the geometric parameters to the end-effector position errors, is derived. With consideration of the constraint conditions on pressure angles and feasible workspace, the dimensional synthesis is conducted with the goal of minimizing the global comprehensive performance index. The dimensional parameters that give the mechanism optimal error mapping and kinematic performance are obtained through the optimization algorithm. These results lay the foundation for building prototypes of this kind of parallel robot.

  13. The stability analysis of the nutrition restricted dynamic model of the microalgae biomass growth

    NASA Astrophysics Data System (ADS)

    Ratianingsih, R.; Fitriani, Nacong, N.; Resnawati, Mardlijah, Widodo, B.

    2018-03-01

    Biomass production is essential in microalgae farming, so it is important to determine the biomass growth rate. This paper proposes a dynamic model of biomass growth restricted by nutrition. The model is developed by considering several related processes: photosynthesis, respiration, nutrient absorption, stabilization, lipid synthesis and CO2 mobilization. The stability of the dynamical system that represents these processes is analyzed using the Jacobian matrix of the linearized system in the neighborhood of its critical point. A lipid formation threshold is needed for its existence; in that case, the absorption rate of the respiration process has to be inversely proportional to the absorption rate of CO2 due to the photosynthesis process. The Pontryagin minimum principle also shows that some requirements are needed to have a stable critical point, such as the rate of CO2 release due to the stabilization process being restricted to 50%, and the threshold of the shifted critical point. If the rate of CO2 release due to the photosynthesis process is restricted to such an interval, the stability of the model at the critical point can no longer be satisfied. The simulation shows that external nutrition plays a role in glucose formation sufficient for biomass growth and lipid production.
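    The stability check described above (linearize at the critical point and inspect the Jacobian eigenvalues) can be done numerically in a few lines. The two-state nutrient/biomass model below is a standard Monod-type stand-in with assumed parameters, not the authors' full microalgae model.

```python
# Numerical Jacobian-based stability check at an equilibrium (illustrative model).
import numpy as np

D, N_in, a, b, k, m = 0.2, 5.0, 1.0, 0.8, 1.5, 0.3     # assumed parameters

def f(state):
    N, B = state
    uptake = N * B / (k + N)
    return np.array([D * (N_in - N) - a * uptake,       # nutrient balance
                     b * uptake - m * B])               # biomass growth

def numerical_jacobian(fun, x, eps=1e-6):
    J = np.zeros((x.size, x.size))
    for j in range(x.size):
        dx = np.zeros(x.size)
        dx[j] = eps
        J[:, j] = (fun(x + dx) - fun(x - dx)) / (2.0 * eps)
    return J

# nontrivial equilibrium: b N/(k+N) = m fixes N*, the nutrient balance fixes B*
N_star = m * k / (b - m)
B_star = D * (N_in - N_star) * (k + N_star) / (a * N_star)
eigs = np.linalg.eigvals(numerical_jacobian(f, np.array([N_star, B_star])))
print("Jacobian eigenvalues:", eigs)
print("locally asymptotically stable:", bool(np.all(eigs.real < 0.0)))
```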

  14. Path planning for robotic truss assembly

    NASA Technical Reports Server (NTRS)

    Sanderson, Arthur C.

    1993-01-01

    A new Potential Fields approach to the robotic path planning problem is proposed and implemented. Our approach, which is based on one originally proposed by Munger, computes an incremental joint vector based upon attraction to a goal and repulsion from obstacles. By repeatedly computing and adding these 'steps', it is hoped (but not guaranteed) that the robot will reach its goal. An attractive force exerted by the goal is found by solving for the minimum-norm solution to the linear Jacobian equation. A repulsive force between obstacles and the robot's links is used to avoid collisions; its magnitude is inversely proportional to the distance. Together, these forces make the goal the global minimum potential point, but local minima can stop the robot from ever reaching that point. Our approach improves on the basic potential field paradigm developed by Munger by using an active, adaptive field - what we will call a 'flexible' potential field. Active fields are stronger when objects move towards one another and weaker when they move apart. An adaptive field's strength is individually tailored to be just strong enough to avoid any collision. In addition to the local planner, a global planning algorithm helps the planner avoid local field minima by providing subgoals. These subgoals are based on the obstacles which caused the local planner to fail. The best-first search algorithm A* is used for the graph search.
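    A minimal version of the incremental step described above: the attractive joint increment is the minimum-norm solution of the linear Jacobian equation (via the pseudo-inverse), and a repulsive term with magnitude inversely proportional to distance pushes away from a point obstacle. For brevity the repulsion acts only at the end-effector, whereas the planner in the abstract repels every link; the planar arm geometry and gains are assumptions.

```python
# Incremental potential-field step with pseudo-inverse attraction (sketch).
import numpy as np

L = np.array([0.5, 0.4, 0.3])                     # assumed link lengths [m]

def fk(q):
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jac(q):
    a = np.cumsum(q)
    return np.array([[-np.sum(L[i:] * np.sin(a[i:])) for i in range(3)],
                     [ np.sum(L[i:] * np.cos(a[i:])) for i in range(3)]])

q = np.array([-0.3, 1.0, 0.5])
goal, obstacle = np.array([0.3, 0.8]), np.array([0.55, 0.45])
k_att, k_rep, step = 1.0, 0.02, 0.05
for _ in range(400):
    x = fk(q)
    d_obs = x - obstacle
    force = k_att * (goal - x) + k_rep * d_obs / (np.linalg.norm(d_obs) ** 2)
    q = q + step * np.linalg.pinv(jac(q)) @ force        # minimum-norm joint increment
print("final distance to goal [m]:", np.linalg.norm(goal - fk(q)))
print("final distance to obstacle [m]:", np.linalg.norm(fk(q) - obstacle))
```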

  15. An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring

    NASA Astrophysics Data System (ADS)

    Li, J. Y.; Kitanidis, P. K.

    2013-12-01

    Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
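    As a point of reference for the iterative scheme, the single stochastic (perturbed-observation) ensemble Kalman analysis step it builds on can be written with only ensemble anomalies, with no explicit Jacobian or full covariance matrix. The state size, ensemble size, observation operator and prior statistics below are illustrative assumptions.

```python
# Stochastic (perturbed-observation) ensemble Kalman analysis step (sketch).
import numpy as np

rng = np.random.default_rng(4)
n_state, n_obs, n_ens = 100, 10, 40
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n_state))

H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 10)] = 1.0     # observe every 10th cell
obs_std = 0.05
y = H @ truth + rng.normal(0.0, obs_std, n_obs)

# prior ensemble: biased mean plus smooth correlated perturbations (assumed)
idx = np.arange(n_state)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)
ens = (truth[None, :] + 0.6
       + 0.5 * rng.normal(size=(n_ens, n_state)) @ np.linalg.cholesky(C).T)

A = ens - ens.mean(axis=0)                    # ensemble anomalies
HA = A @ H.T
P_HT = A.T @ HA / (n_ens - 1)                 # sample cross-covariance
S = HA.T @ HA / (n_ens - 1) + obs_std ** 2 * np.eye(n_obs)
K = P_HT @ np.linalg.inv(S)                   # ensemble Kalman gain
y_pert = y[None, :] + rng.normal(0.0, obs_std, (n_ens, n_obs))
analysis = ens + (y_pert - ens @ H.T) @ K.T   # perturbed-observation update

print("prior rms error:    ", np.sqrt(np.mean((ens.mean(0) - truth) ** 2)))
print("posterior rms error:", np.sqrt(np.mean((analysis.mean(0) - truth) ** 2)))
```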

  16. Fully three-dimensional and viscous semi-inverse method for axial/radial turbomachine blade design

    NASA Astrophysics Data System (ADS)

    Ji, Min

    2008-10-01

A fully three-dimensional viscous semi-inverse method for the design of turbomachine blades is presented in this work. Built on a time-marching Reynolds-averaged Navier-Stokes solver, the inverse scheme is capable of designing axial/radial turbomachinery blades in flow regimes ranging from very low Mach number to transonic/supersonic flows. In order to solve the flow at all-speed conditions, a preconditioning technique is incorporated into the basic JST time-marching scheme. The accuracy of the resulting flow solver is verified against documented experimental data and commercial CFD codes. The level of accuracy of the flow solver exhibited in those verification cases is typical of CFD analysis employed in the industrial design process. The inverse method described in the present work takes pressure loading and blade thickness as prescribed quantities and computes the corresponding three-dimensional blade camber surface. In order to have the option of imposing geometrical constraints on the designed blade shapes, a new inverse algorithm is developed to solve the camber surface at specified spanwise pseudo stream-tubes (i.e., along grid lines), while the blade geometry is constructed through ruling (e.g., straight-line elements) at the remaining spanwise stations. The new inverse algorithm involves re-formulating the boundary condition on the blade surfaces as a hybrid inverse/analysis boundary condition, preserving the fully three-dimensional nature of the flow. The new design procedure can therefore be interpreted as a fully three-dimensional viscous semi-inverse method. The ruled-surface design ensures blade surface smoothness and mechanical integrity and reduces manufacturing cost. A numerical target-shooting experiment for a mixed-flow impeller shows that the semi-inverse method is able to accurately recover a target blade composed of straight-line elements starting from a different initial blade. The semi-inverse method is shown to work well with various loading strategies for the mixed-flow impeller. It is demonstrated that uniformity of the impeller exit flow and performance gains can be achieved with appropriate loading combinations at hub and shroud. An application of this semi-inverse method is also demonstrated through the redesign of an industrial shrouded subsonic centrifugal impeller. The redesigned impeller shows improved performance and operating range compared with the original one. Preliminary blade-design studies presented in this work show that, through the choice of the prescribed pressure-loading profiles, this semi-inverse method can be used to design blades with the following objectives: (1) various operating envelopes; (2) uniformity of the impeller exit flow; (3) overall performance improvement. By specifying the blade pressure loading with the proposed semi-inverse method, instead of manually adjusting blade angles as in the conventional design approach, designers can explore regions of the blade-geometry design space that have not been explored before.

  17. A time domain inverse dynamic method for the end point tracking control of a flexible manipulator

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo; Book, Wayne J.

    1991-01-01

The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into a causal part and an anticausal part, we calculated the torque and the trajectories of all state variables for a given end-point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. Open-loop control using the inverse dynamic method shows excellent results in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.

  18. Adsorption of phenolic compound by aged-refuse.

    PubMed

    Xiaoli, Chai; Youcai, Zhao

    2006-09-01

The adsorption of phenol, 2-chlorophenol, 4-chlorophenol and 2,4-dichlorophenol by aged-refuse has been studied. Adsorption isotherms were determined for phenol, 2-chlorophenol, 4-chlorophenol and 2,4-dichlorophenol, and the data fit the Freundlich equation well. The chlorinated phenols are adsorbed more strongly than phenol, and the adsorption capacity has an obvious relationship with the number and position of chlorine substituents. The experimental data suggest that both partitioning and chemical adsorption are involved in the adsorption process. Pseudo-first-order and pseudo-second-order models were applied to investigate the kinetics of the adsorption, and the results show that the data fit the pseudo-second-order model. More than one step is involved in the adsorption process, and the overall rate appears to be controlled by the chemical reaction. The thermodynamic analysis indicates that the adsorption is spontaneous and endothermic.
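
    A hedged numerical illustration of the pseudo-second-order model referenced above: in its linearized form t/q_t = 1/(k2*qe**2) + t/qe, a straight-line fit of t/q_t against t yields the equilibrium uptake qe and rate constant k2. The data values below are invented for demonstration, not measurements from the study.

        import numpy as np

        t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 120.0])    # contact time (min), illustrative
        q = np.array([1.8, 2.6, 3.4, 4.0, 4.3, 4.6])          # uptake q_t (mg/g), illustrative

        slope, intercept = np.polyfit(t, t / q, 1)            # linearized pseudo-second-order fit
        qe = 1.0 / slope                                      # equilibrium adsorption capacity (mg/g)
        k2 = slope ** 2 / intercept                           # rate constant (g mg^-1 min^-1)
        print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")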

  19. Sensor-Only System Identification for Structural Health Monitoring of Advanced Aircraft

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Bernstein, Dennis S.

    2012-01-01

Environmental conditions, cyclic loading, and aging contribute to structural wear and degradation, and thus potentially to catastrophic events. The challenge of health monitoring technology is to determine incipient changes accurately and efficiently. This project addresses this challenge by developing health monitoring techniques that depend only on sensor measurements. Since actively controlled excitation is not needed, sensor-to-sensor identification (S2SID) provides an in-flight diagnostic tool that exploits ambient excitation to provide advance warning of significant changes. S2SID can subsequently be followed up by ground testing to localize and quantify structural changes. The conceptual foundation of S2SID is the notion of a pseudo-transfer function, where one sensor is viewed as the pseudo-input and another is viewed as the pseudo-output. This approach is less restrictive than transmissibility identification and operational modal analysis since no assumption is made about the locations of the sensors relative to the excitation.
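
    The pseudo-transfer-function idea can be sketched with standard spectral estimation: one sensor is treated as the pseudo-input and another as the pseudo-output, and their frequency response is estimated from ambient data. This is a generic H1-type estimate, not the S2SID algorithm itself, and the synthetic signals and filter coefficients below are assumptions.

        import numpy as np
        from scipy import signal

        fs = 500.0
        t = np.arange(0, 60, 1 / fs)
        ambient = np.random.default_rng(1).standard_normal(t.size)   # unmeasured ambient excitation
        s1 = signal.lfilter([1.0], [1.0, -0.95], ambient)            # pseudo-input sensor record
        s2 = signal.lfilter([0.5, 0.2], [1.0, -0.9], s1)             # pseudo-output sensor record

        f, Pxy = signal.csd(s1, s2, fs=fs, nperseg=2048)             # cross-spectral density
        _, Pxx = signal.welch(s1, fs=fs, nperseg=2048)               # auto-spectral density of pseudo-input
        H_pseudo = Pxy / Pxx     # H1 estimate of the sensor-to-sensor (pseudo) transfer function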

  20. Magnetic basement and crustal structure in the Arabia-Eurasia collision zone from a combined gravity and magnetic model

    NASA Astrophysics Data System (ADS)

    Mousavi, Naeim; Ebbing, Jörg

    2017-04-01

In this study, we investigate the magnetic basement and crustal structure in the region of Iran by inverse and forward modeling of aeromagnetic and gravity data. The main focus is on the definition of the magnetic top basement. The combination of multiple shallow magnetic sources and an assumed shallow Curie isotherm depth beneath the Iranian Plateau creates a complex magnetic architecture over the area. Qualitative analysis, including pseudo-gravity, wavelength filtering and upward continuation, allowed a first separation of probable deep and shallow features, like the Sanandaj-Sirjan zone, the Urumieh-Dokhtar Magmatic Assemblage, the Kopet Dagh structural unit and the Central Iran domain. In a second step, we apply inverse modeling to generate an estimate of the top-basement geometry. The initial model extended from the top basement down to (a) a constant depth of 25 km and (b) the Moho depth. The inversion result was used as the starting model for more detailed 3D modelling to evaluate the effect of susceptibility heterogeneities in the crust. Subsequently, the model was modified with respect to the tectonic and geological characterization of the region. Further refinement of the susceptibility distribution led to separating the upper crust into different magnetic domains. We also refined the top-basement geometry using terrestrial gravity observations. The best-fitting model is consistent with the Curie isotherm depth as the base of magnetization. The Curie isotherm was derived from an independent geophysical-petrological model.

  1. On the Locality of Transient Electromagnetic Soundings with a Single-Loop Configuration

    NASA Astrophysics Data System (ADS)

    Barsukov, P. O.; Fainberg, E. B.

    2018-03-01

The possibilities of reconstructing two-dimensional (2D) cross sections from profile soundings by the transient electromagnetic method (TEM) with a single ungrounded loop are illustrated on three-dimensional (3D) models. The reconstruction process includes three main steps: transformation of the responses into the depth dependence of resistivity ρ(h) measured along the profile, with subsequent stitching into a 2D pseudo-section; point-by-point one-dimensional (1D) inversion of the responses with the starting model constructed from the transformations; and correction of the 2D cross section using 2.5-dimensional (2.5D) block inversion. It is shown that single-loop TEM soundings allow the geological medium to be studied within a local domain whose lateral dimensions are commensurate with the depth of investigation. The structure of the medium beyond this domain affects the sounding results only insignificantly. This locality enables the TEM to reconstruct the geoelectrical structure of the medium from the 2D cross sections with minimal distortions caused by the lack of information beyond the profile of the transient-response measurements.

  2. Exploiting Surface Albedos Products to Bridge the Gap Between Remote Sensing Information and Climate Models

    NASA Astrophysics Data System (ADS)

    Pinty, Bernard; Andredakis, Ioannis; Clerici, Marco; Kaminski, Thomas; Taberner, Malcolm; Stephen, Plummer

    2011-01-01

We present results from the application of an inversion method conducted using MODIS-derived broadband visible and near-infrared surface albedo products. This contribution is an extension of earlier efforts to optimally retrieve land surface fluxes and associated two-stream model parameters based on the Joint Research Centre Two-stream Inversion Package (JRC-TIP). The discussion focuses on products (based on the mean and one-sigma values of the Probability Distribution Functions (PDFs)) obtained during the summer and winter and highlights specific issues related to snowy conditions. This paper discusses the retrieved model parameters including the effective Leaf Area Index (LAI), the background brightness and the scattering efficiency of the vegetation elements. The spatial and seasonal changes exhibited by these parameters agree with common knowledge and underscore the richness of the high-quality surface albedo data sets. At the same time, the opportunity to generate global maps of new products, such as the background albedo, underscores the advantages of using state-of-the-art algorithmic approaches capable of fully exploiting accurate satellite remote sensing datasets. The detailed analyses of the retrieval uncertainties highlight the central role and contribution of the LAI, the main process parameter for interpreting radiation transfer observations over vegetated surfaces. The posterior covariance matrix of the uncertainties is further exploited to quantify the knowledge gain from the ingestion of MODIS surface albedo products. The estimation of the radiation fluxes that are absorbed, transmitted and scattered by the vegetation layer and its background is achieved on the basis of the retrieved PDFs of the model parameters. The propagation of uncertainties from the observations to the model parameters is achieved via the Hessian of the cost function and yields a covariance matrix of posterior parameter uncertainties. This matrix is propagated to the radiation fluxes via the model's Jacobian matrix of first derivatives. A definite asset of the JRC-TIP lies in its capability to control and ultimately relax a number of assumptions that are often implicit in traditional approaches. These features greatly help in understanding the discrepancies between the different data sets of land surface properties and fluxes that are currently available. Through a series of selected examples, the inverse procedure implemented in the JRC-TIP is shown to be robust, reliable and compliant with large-scale processing requirements. Furthermore, this package ensures the physical consistency between the set of observations, the two-stream model parameters and radiation fluxes. It also documents the retrieval of associated uncertainties. The knowledge gained from the availability of remote sensing surface albedo products can be expressed in quantitative terms using a simple metric. This metric helps identify the geographical locations and periods of the year where the remote sensing products fail to reduce the uncertainty in the process model parameters relative to what can be specified from current knowledge.
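
    A minimal sketch of the first-order uncertainty propagation described above: the posterior parameter covariance is mapped to flux space through the model's Jacobian, C_flux = J C_param J^T. The parameter set, Jacobian entries and covariance values are placeholders, not JRC-TIP outputs.

        import numpy as np

        C_p = np.diag([0.04, 0.01, 0.02])       # posterior covariance of (LAI, background, scattering), assumed
        J = np.array([[ 0.30, 0.10, 0.50],      # d(absorbed flux)/d(parameters), assumed values
                      [-0.25, 0.60, 0.05],      # d(transmitted flux)/d(parameters)
                      [-0.05, 0.30, 0.45]])     # d(reflected flux)/d(parameters)

        C_f = J @ C_p @ J.T                     # covariance of the radiation fluxes
        flux_sigmas = np.sqrt(np.diag(C_f))     # one-sigma flux uncertainties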

  3. An H-infinity approach to optimal control of oxygen and carbon dioxide contents in blood

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Selisteanu, Dan; Precup, Radu

    2016-12-01

    Nonlinear H-infinity control is proposed for the regulation of the levels of oxygen and carbon dioxide in the blood of patients undergoing heart surgery and extracorporeal blood circulation. The levels of blood gases are administered through a membrane oxygenator and the control inputs are the externally supplied oxygen, the aggregate gas supply (oxygen plus nitrogen), and the blood flow which is regulated by a blood pump. The proposed control method is based on linearization of the oxygenator's dynamical model through Taylor series expansion and the computation of Jacobian matrices. The local linearization points are defined by the present value of the oxygenator's state vector and the last value of the control input that was exerted on this system. The modelling errors due to linearization are considered as disturbances which are compensated by the robustness of the control loop. Next, for the linearized model of the oxygenator an H-infinity control input is computed at each iteration of the control algorithm through the solution of an algebraic Riccati equation. With the use of Lyapunov stability analysis it is demonstrated that the control scheme satisfies the H-infinity tracking performance criterion, which signifies improved robustness against modelling uncertainty and external disturbances. Moreover, under moderate conditions the asymptotic stability of the control loop is also proven.
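
    The Riccati-equation step mentioned above can be sketched briefly. For simplicity the example solves a plain continuous-time algebraic Riccati equation for a state-feedback gain on a placeholder linearized model (A, B); the actual H-infinity design additionally involves the disturbance input matrix and an attenuation level, which are omitted here.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [-0.5, -0.2]])   # Jacobian of the linearized plant (placeholder values)
        B = np.array([[0.0], [1.0]])               # control-input matrix (placeholder)
        Q = np.eye(2)                              # state weighting
        R = np.array([[1.0]])                      # control weighting

        P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati equation
        K = np.linalg.solve(R, B.T @ P)            # feedback gain, u = -K x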

  4. A nonlinear optimal control approach for chaotic finance dynamics

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.

    2017-11-01

A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control input vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher-order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
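
    A hedged sketch of the local linearization step shared by this and the preceding approach: the Jacobians A = df/dx and B = df/du are evaluated at the current operating point (x0, u0) by finite differences. The dynamics used below is a simple placeholder, not the model of the paper.

        import numpy as np

        def numerical_jacobians(f, x0, u0, eps=1e-6):
            """Finite-difference Jacobians of x_dot = f(x, u) at the operating point (x0, u0)."""
            n, m = x0.size, u0.size
            f0 = f(x0, u0)
            A = np.zeros((n, n))
            B = np.zeros((n, m))
            for i in range(n):
                dx = np.zeros(n); dx[i] = eps
                A[:, i] = (f(x0 + dx, u0) - f0) / eps
            for j in range(m):
                du = np.zeros(m); du[j] = eps
                B[:, j] = (f(x0, u0 + du) - f0) / eps
            return A, B

        def toy_dynamics(x, u):                  # placeholder dynamics for illustration only
            return np.array([x[1] + u[0], -0.1 * x[0] - 0.5 * x[1] ** 2 + u[1]])

        A, B = numerical_jacobians(toy_dynamics, np.array([0.2, 0.1]), np.zeros(2))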

  5. A robust direct-integration method for rotorcraft maneuver and periodic response

    NASA Technical Reports Server (NTRS)

    Panda, Brahmananda

    1992-01-01

    The Newmark-Beta method and the Newton-Raphson iteration scheme are combined to develop a direct-integration method for evaluating the maneuver and periodic-response expressions for rotorcraft. The method requires the generation of Jacobians and includes higher derivatives in the formulation of the geometric stiffness matrix to enhance the convergence of the system. The method leads to effective convergence with nonlinear structural dynamics and aerodynamic terms. Singularities in the matrices can be addressed with the method as they arise from a Lagrange multiplier approach for coupling equations with nonlinear constraints. The method is also shown to be general enough to handle singularities from quasisteady control-system models. The method is shown to be more general and robust than the similar 2GCHAS method for analyzing rotorcraft dynamics.

  6. New experimental results in atlas-based brain morphometry

    NASA Astrophysics Data System (ADS)

    Gee, James C.; Fabella, Brian A.; Fernandes, Siddharth E.; Turetsky, Bruce I.; Gur, Ruben C.; Gur, Raquel E.

    1999-05-01

    In a previous meeting, we described a computational approach to MRI morphometry, in which a spatial warp mapping a reference or atlas image into anatomic alignment with the subject is first inferred. Shape differences with respect to the atlas are then studied by calculating the pointwise Jacobian determinant for the warp, which provides a measure of the change in differential volume about a point in the reference as it transforms to its corresponding position in the subject. In this paper, the method is used to analyze sex differences in the shape and size of the corpus callosum in an ongoing study of a large population of normal controls. The preliminary results of the current analysis support findings in the literature that have observed the splenium to be larger in females than in males.
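
    A short sketch of the voxelwise Jacobian-determinant measure of local volume change, for a dense 3-D displacement field u of shape (3, X, Y, Z) and unit voxel spacing; this is a generic implementation for illustration, not the warp or morphometry pipeline used in the study.

        import numpy as np

        def jacobian_determinant(u):
            """det(I + grad u) at every voxel of a displacement field u (shape 3 x X x Y x Z)."""
            grads = [np.gradient(u[i]) for i in range(3)]        # du_i/dx_j for each component
            J = np.zeros(u.shape[1:] + (3, 3))
            for i in range(3):
                for j in range(3):
                    J[..., i, j] = grads[i][j]
            J += np.eye(3)                                       # transform = identity + displacement
            return np.linalg.det(J)                              # >1 local expansion, <1 contraction

        u = 0.02 * np.random.default_rng(0).standard_normal((3, 16, 16, 16))   # toy warp field
        det_map = jacobian_determinant(u)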

  7. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is made in a general purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's models, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter method appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.

  8. Efficient stabilization and acceleration of numerical simulation of fluid flows by residual recombination

    NASA Astrophysics Data System (ADS)

    Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.

    2017-09-01

    The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix based methods, like for example the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, lead to computational costs too large in terms of both memory and execution time. We present a novel iterative algorithm, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step with a projection basis updated at each iteration rather than at periodic restarts like in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Michael L M; Chan, Anthony T C; The Chinese University of Hong Kong

Purpose: To develop a formulation for 4D treatment planning for a tumour-tracking volumetric modulated arc therapy (VMAT) plan for lung cancer. Methods: A VMAT plan was optimized based on a reference phase of the 4DCT of a lung cancer patient. The PTV was generated from the GTV of the reference phase. The collimator angle was set to 90 degrees such that the MLC travels along the superior-inferior direction, which is the main component of movement of a lung tumour. Then, each control point of the VMAT plan was assigned to a particular phase of the 4DCT in chronological order. The MLC positions of each control point were shifted according to the position of the tumour centroid of its assigned phase to form a tumour-tracking VMAT plan. The control points of the same phase were grouped to form a pseudo VMAT plan for that particular phase. Dose calculation was performed for each pseudo VMAT plan on the corresponding phase of the 4DCT. The CTs of all phases were registered to the reference phase CT according to the displacement of the tumour centroid. The individual dose distributions of the pseudo VMAT plans were summed up and displayed on the reference phase of the 4DCT. A control VMAT plan was optimized based on a PTV generated from the ITV of all phases and compared with the tumour-tracking VMAT plan. Results: Both plans achieved >95% volume coverage at the prescription dose level (96% for the tumour-tracking plan and 97% for the control plan), but the normal lung volume irradiated at the prescription dose level was 39% less for the tumour-tracking plan than for the control plan. Conclusion: A formulation of 4D treatment planning for tumour-tracking VMAT plans for lung cancer was developed.

  10. Pseudo-shock waves and their interactions in high-speed intakes

    NASA Astrophysics Data System (ADS)

    Gnani, F.; Zare-Behtash, H.; Kontis, K.

    2016-04-01

In an air-breathing engine, the flow deceleration from supersonic to subsonic conditions takes place inside the isolator through a gradual compression consisting of a series of shock waves. The wave system, referred to as a pseudo-shock wave or shock train, establishes the combustion chamber entrance conditions and therefore influences the performance of the entire propulsion system. The characteristics of the pseudo-shock depend on a number of variables, which makes this flow phenomenon particularly challenging to analyse. Difficulties in experimentally obtaining accurate flow quantities at high speeds, and discrepancies of numerical approaches with measured data, have been widely reported. Understanding the flow physics in the presence of the interaction of numerous shock waves with the boundary layer in internal flows is essential to developing methods and control strategies. To counteract the negative effects of shock wave/boundary layer interactions, which are responsible for the engine unstart process, multiple flow control methodologies have been proposed. Improved analytical models, advanced experimental methodologies and numerical simulations have allowed a more in-depth analysis of the flow physics. The present paper aims to bring together the main results on the shock-train structure and its associated phenomena inside isolators, studied using the aforementioned tools. Several promising flow control techniques that have more recently been applied to manipulate the shock wave/boundary layer interaction are also examined in this review.

  11. Sympathetic nerve dysfunction is common in patients with chronic intestinal pseudo-obstruction.

    PubMed

    Mattsson, Tomas; Roos, Robert; Sundkvist, Göran; Valind, Sven; Ohlsson, Bodil

    2008-02-01

    To clarify whether disturbances in the autonomic nervous system, reflected in abnormal cardiovascular reflexes, could explain symptoms of impaired heat regulation in patients with intestinal pseudo-obstruction. Chronic intestinal pseudo-obstruction is a clinical syndrome characterized by diffuse, unspecific gastrointestinal symptoms due to damage to the enteric nervous system or the smooth muscle cells. These patients often complain of excessive sweating or feeling cold, suggesting disturbances in the autonomic nervous system. Earlier studies have pointed to a coexistence of autonomic disturbances in the enteric and cardiovascular nervous system. Thirteen consecutive patients (age range 23 to 79, mean 44 y) fulfilling the criteria for chronic intestinal pseudo-obstruction were investigated. Six of them complained of sweating or a feeling of cold. Examination of autonomic reflexes included heart rate variation to deep-breathing (expiration/inspiration index), heart rate reaction to tilt (acceleration index, brake index), and vasoconstriction (VAC) due to indirect cooling by laser doppler (VAC-index; high index indicates impaired VAC). Test results in patients were compared with healthy individuals. Patients had significantly higher (more abnormal) median VAC-index compared with healthy controls [1.79 (interquartile ranges 1.89) vs. 0.08 (interquartile ranges 1.29); P=0.0007]. However, symptoms of impaired heat regulation were not related to the VAC-index. There were no differences in expiration/inspiration, acceleration index, or brake index between patients and controls. The patients with severe gastrointestinal dysmotility showed impaired sympathetic nerve function which, however, did not seem to be associated with symptoms of impaired heat regulation.

  12. Advanced classical thermodynamics

    NASA Astrophysics Data System (ADS)

    Emanuel, George

    The theoretical and mathematical foundations of thermodynamics are presented in an advanced text intended for graduate engineering students. Chapters are devoted to definitions and postulates, the fundamental equation, equilibrium, the application of Jacobian theory to thermodynamics, the Maxwell equations, stability, the theory of real gases, critical-point theory, and chemical thermodynamics. Diagrams, graphs, tables, and sample problems are provided.

  13. A systematic study of finite BRST-BFV transformations in generalized Hamiltonian formalism

    NASA Astrophysics Data System (ADS)

    Batalin, Igor A.; Lavrov, Peter M.; Tyutin, Igor V.

    2014-09-01

    We study systematically finite BRST-BFV transformations in the generalized Hamiltonian formalism. We present explicitly their Jacobians and the form of a solution to the compensation equation determining the functional field dependence of finite Fermionic parameters, necessary to generate an arbitrary finite change of gauge-fixing functions in the path integral.

  14. Finite BRST-BFV transformations for dynamical systems with second-class constraints

    NASA Astrophysics Data System (ADS)

    Batalin, Igor A.; Lavrov, Peter M.; Tyutin, Igor V.

    2015-06-01

    We study finite field-dependent BRST-BFV transformations for dynamical systems with first- and second-class constraints within the generalized Hamiltonian formalism. We find explicitly their Jacobians and the form of a solution to the compensation equation necessary for generating an arbitrary finite change of gauge-fixing functionals in the path integral.

  15. A Continuum Description of Nonlinear Elasticity, Slip and Twinning, With Application to Sapphire

    DTIC Science & Technology

    2009-03-01

Twinning is modelled via the isochoric term F^I, and residual volume changes associated with defects are captured by the Jacobian determinant J.

  16. GRIZZLY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-12-17

    Grizzly is a simulation tool for assessing the effects of age-related degradation on systems, structures, and components of nuclear power plants. Grizzly is built on the MOOSE framework, and uses a Jacobian-free Newton Krylov method to obtain solutions to tightly coupled thermo-mechanical simulations. Grizzly runs on a wide range of hardware, from a single processor to massively parallel machines.
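
    The Jacobian-free Newton-Krylov idea used by MOOSE-based codes such as Grizzly can be sketched briefly: the Krylov solver only needs Jacobian-vector products, which are approximated by differencing the nonlinear residual, so the Jacobian is never assembled. The residual, step size and tolerances below are placeholders for illustration, not Grizzly internals.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def residual(u):                         # placeholder nonlinear residual F(u) = 0
            return u ** 3 + 2.0 * u - 1.0

        def jfnk_step(u, eps=1e-7):
            F = residual(u)
            matvec = lambda v: (residual(u + eps * v) - F) / eps   # J(u) v without forming J
            J_op = LinearOperator((u.size, u.size), matvec=matvec)
            du, _ = gmres(J_op, -F, atol=1e-10)                    # Krylov solve of J du = -F
            return u + du

        u = np.full(4, 0.5)
        for _ in range(10):                      # Newton iterations
            u = jfnk_step(u)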

  17. Topology preserving non-rigid image registration using time-varying elasticity model for MRI brain volumes.

    PubMed

    Ahmad, Sahar; Khan, Muhammad Faisal

    2015-12-01

    In this paper, we present a new non-rigid image registration method that imposes a topology preservation constraint on the deformation. We propose to incorporate the time varying elasticity model into the deformable image matching procedure and constrain the Jacobian determinant of the transformation over the entire image domain. The motion of elastic bodies is governed by a hyperbolic partial differential equation, generally termed as elastodynamics wave equation, which we propose to use as a deformation model. We carried out clinical image registration experiments on 3D magnetic resonance brain scans from IBSR database. The results of the proposed registration approach in terms of Kappa index and relative overlap computed over the subcortical structures were compared against the existing topology preserving non-rigid image registration methods and non topology preserving variant of our proposed registration scheme. The Jacobian determinant maps obtained with our proposed registration method were qualitatively and quantitatively analyzed. The results demonstrated that the proposed scheme provides good registration accuracy with smooth transformations, thereby guaranteeing the preservation of topology. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Robust Kalman filtering cooperated Elman neural network learning for vision-sensing-based robotic manipulation with global stability.

    PubMed

    Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu

    2013-10-08

In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF) in conjunction with Elman neural network (ENN) learning techniques. The global mapping between the vision space and the robotic workspace is learned using an ENN. This learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained using a robust KF to refine the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair (obtained from the KF cycle) to ensure globally stable manipulation. Thus, our method, which requires neither camera nor model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with an eye-in-hand configuration.

  19. Preliminary design study of a lateral-directional control system using thrust vectoring

    NASA Technical Reports Server (NTRS)

    Lallman, F. J.

    1985-01-01

A preliminary design of a lateral-directional control system for a fighter airplane capable of controlled operation at extreme angles of attack is developed. The subject airplane, representative of a modern twin-engine high-performance jet fighter, is equipped with ailerons, rudder, and independent horizontal-tail surfaces. Idealized bidirectional thrust-vectoring engine nozzles are appended to the mathematical model of the airplane to provide additional control moments. Optimal schedules for lateral and directional pseudo control variables are calculated. Use of pseudo controls results in coordinated operation of the aerodynamic and thrust-vectoring controls with minimum coupling between the lateral and directional airplane dynamics. Linear quadratic regulator designs are used to specify a preliminary flight control system to improve the stability and response characteristics of the airplane. Simulated responses to step pilot control inputs are stable and well behaved. For lateral stick deflections, peak stability-axis roll rates are between 1.25 and 1.60 rad/sec over an angle-of-attack range of 10 deg to 70 deg. For rudder pedal deflections, the roll rates accompanying the sideslip responses can be arrested by small lateral stick motions.

  20. Stabilization of business cycles of finance agents using nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.

    2017-11-01

Stabilization of the business cycles of interconnected finance agents is performed with the use of a new nonlinear optimal control method. First, the dynamics of the interacting finance agents and of the associated business cycles is described by a model of coupled nonlinear oscillators. Next, this dynamic model undergoes approximate linearization around a temporary operating point which is defined by the present value of the system's state vector and the last value of the control input vector that was exerted on it. The linearization procedure is based on Taylor series expansion of the dynamic model and on the computation of Jacobian matrices. The modelling error, which is due to the truncation of higher-order terms in the Taylor series expansion, is considered as a disturbance which is compensated by the robustness of the control loop. Next, for the linearized model of the interacting finance agents, an H-infinity feedback controller is designed. The computation of the feedback control gain requires the solution of an algebraic Riccati equation at each iteration of the control algorithm. Through Lyapunov stability analysis it is proven that the control scheme satisfies an H-infinity tracking performance criterion, which signifies elevated robustness against modelling uncertainty and external perturbations. Moreover, under moderate conditions the global asymptotic stability features of the control loop are proven.

  1. Biosorption of Zn(II) from industrial effluents using sugar beet pulp and F. vesiculosus: From laboratory tests to a pilot approach.

    PubMed

    Castro, Laura; Blázquez, M Luisa; González, Felisa; Muñoz, Jesús A; Ballester, Antonio

    2017-11-15

The aim of this work was to demonstrate the feasibility of applying biosorption to the treatment of metal-polluted wastewaters through the development of several pilot plants to be implemented by industry. The use of both the brown seaweed Fucus vesiculosus and sugar beet pulp as biosorbents was investigated to remove heavy metal ions from a wastewater generated by an electroplating company, Industrial Goñabe (Valladolid, Spain). Batch experiments were performed to study the effects of pH, contact time and initial metal concentration on metal biosorption. It was observed that the adsorption capacity of the biosorbents strongly depended on the pH, increasing as the pH rises from 2 to 5. The adsorption kinetics were studied using three models: the pseudo-first-order, pseudo-second-order and Elovich models. The experimental data were fitted to the Langmuir and Freundlich isotherm models, and the brown alga F. vesiculosus showed higher metal uptake than the sugar beet pulp. The biomasses were also used for zinc removal in fixed-bed columns. The performance of the system was evaluated under different experimental conditions. Mixing the two biomasses, using serial columns and reversing the flow are interesting options for improving the biosorption process for large-scale applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Neural-Based Compensation of Nonlinearities in an Airplane Longitudinal Model with Dynamic-Inversion Control

    PubMed Central

    Li, YuHui; Jin, FeiTeng

    2017-01-01

    The inversion design approach is a very useful tool for the complex multiple-input-multiple-output nonlinear systems to implement the decoupling control goal, such as the airplane model and spacecraft model. In this work, the flight control law is proposed using the neural-based inversion design method associated with the nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematic model is converted to the equivalent linear model based on the feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects on both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680

  3. Overdetermined shooting methods for computing standing water waves with spectral accuracy

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon; Yu, Jia

    2012-01-01

A high-performance shooting algorithm is developed to compute time-periodic solutions of the free-surface Euler equations with spectral accuracy in double and quadruple precision. The method is used to study resonance and its effect on standing water waves. We identify new nucleation mechanisms in which isolated large-amplitude solutions, and closed loops of such solutions, suddenly exist for depths below a critical threshold. We also study degenerate and secondary bifurcations related to Wilton's ripples in the traveling case, and explore the breakdown of self-similarity at the crests of extreme standing waves. In shallow water, we find that standing waves take the form of counter-propagating solitary waves that repeatedly collide quasi-elastically. In deep water with surface tension, we find that standing waves resemble counter-propagating depression waves. We also discuss the existence and non-uniqueness of solutions, and smooth versus erratic dependence of Fourier modes on wave amplitude and fluid depth. In the numerical method, robustness is achieved by posing the problem as an overdetermined nonlinear system and using either adjoint-based minimization techniques or a quadratically convergent trust-region method to minimize the objective function. Efficiency is achieved in the trust-region approach by parallelizing the Jacobian computation, so the setup cost of computing the Dirichlet-to-Neumann operator in the variational equation is not repeated for each column. Updates of the Jacobian are also delayed until the previous Jacobian ceases to be useful. Accuracy is maintained using spectral collocation with optional mesh refinement in space, a high-order Runge-Kutta or spectral deferred correction method in time and quadruple precision for improved navigation of delicate regions of parameter space as well as validation of double-precision results. Implementation issues for transferring much of the computation to graphics processing units are briefly discussed, and the performance of the algorithm is tested for a number of hardware configurations.

  4. TH-CD-202-06: A Method for Characterizing and Validating Dynamic Lung Density Change During Quiet Respiration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, T; Ruan, D; Heinrich, M

    2016-06-15

Purpose: To obtain a functional relationship that calibrates the lung tissue density change under free-breathing conditions by correlating Jacobian values to Hounsfield units. Methods: Free-breathing lung computed tomography images were acquired using a fast helical CT protocol, with 25 scans acquired per patient. Using a state-of-the-art deformable registration algorithm, a set of deformation vector fields (DVFs) was generated to provide spatial mapping from the reference image geometry to the other free-breathing scans. These DVFs were used to generate Jacobian maps, which estimate voxelwise volume change. Subsequently, the set of 25 corresponding Jacobian values and voxel intensities in Hounsfield units (HU) was collected and linear regression was performed based on the mass conservation relationship to correlate the volume change to density change. Based on the resulting fitting coefficients, the tissues were classified into parenchymal (Type I), vascular (Type II), and soft tissue (Type III) types. These coefficients modeled the voxelwise density variation during quiet breathing. The accuracy of the proposed method was assessed using the mean absolute difference in HU between the CT scan intensities and the model-predicted values. In addition, validation experiments employing a leave-five-out method were performed to evaluate the model accuracy. Results: The computed mean model errors were 23.30±9.54 HU, 29.31±10.67 HU, and 35.56±20.56 HU for regions I, II, and III, respectively. The cross-validation experiments averaged over 100 trials had mean errors of 30.02 ± 1.67 HU over the entire lung. These mean values were comparable with the estimated CT image background noise. Conclusion: The reported validation statistics confirm the lung density modeling during free breathing. The proposed technique is general and could be applied to a wide range of problem scenarios where accurate dynamic lung density information is needed. This work was supported in part by NIH R01 CA0096679.
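
    A hedged illustration of the calibration idea: per voxel, the HU intensities from the 25 scans are regressed against the reciprocal of the Jacobian values, following the mass-conservation argument that density scales inversely with local volume change. The data below are synthetic, and the exact regression form used in the study may differ.

        import numpy as np

        rng = np.random.default_rng(2)
        jac = rng.uniform(0.8, 1.2, 25)                     # Jacobian values of one voxel across 25 scans
        hu = -1000.0 + 300.0 / jac + rng.normal(0, 15, 25)  # synthetic voxel intensities (HU)

        slope, intercept = np.polyfit(1.0 / jac, hu, 1)     # HU modeled as linear in 1/J
        hu_pred = intercept + slope / jac
        mean_abs_err = np.mean(np.abs(hu - hu_pred))        # model error in HU, cf. the reported values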

  5. TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Gordon, J; Chetty, I

    2014-06-15

Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time Td, half-life of dying cells Tr and cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, the Chvetsov model (C-model) and the Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.

  6. Objectively Quantifying Radiation Esophagitis With Novel Computed Tomography–Based Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niedzielski, Joshua S., E-mail: jsniedzielski@mdanderson.org; University of Texas Houston Graduate School of Biomedical Science, Houston, Texas; Yang, Jinzhong

Purpose: To study radiation-induced esophageal expansion as an objective measure of radiation esophagitis in patients with non-small cell lung cancer (NSCLC) treated with intensity modulated radiation therapy. Methods and Materials: Eighty-five patients had weekly intra-treatment CT imaging and esophagitis scoring according to Common Terminology Criteria for Adverse Events 4.0 (24 Grade 0, 45 Grade 2, and 16 Grade 3). Nineteen esophageal expansion metrics based on mean, maximum, spatial length, and volume of expansion were calculated as voxel-based relative volume change, using the Jacobian determinant from deformable image registration between the planning and weekly CTs. An anatomic variability correction method was validated and applied to these metrics to reduce uncertainty. An analysis of expansion metrics and radiation esophagitis grade was conducted using normal tissue complication probability from univariate logistic regression and Spearman rank correlation for grade 2 and grade 3 esophagitis endpoints, as well as the timing of expansion and esophagitis grade. The metrics' performance in classifying esophagitis was tested with receiver operating characteristic analysis. Results: Expansion increased with esophagitis grade. Thirteen of 19 expansion metrics had receiver operating characteristic area under the curve values >0.80 for both grade 2 and grade 3 esophagitis endpoints, with the highest performance from maximum axial expansion (MaxExp1) and esophageal length with axial expansion ≥30% (LenExp30%), with area under the curve values of 0.93 and 0.91 for grade 2 and 0.90 and 0.90 for grade 3 esophagitis, respectively. Conclusions: Esophageal expansion may be a suitable objective measure of esophagitis, particularly maximum axial esophageal expansion and esophageal length with axial expansion ≥30%, with a 2.1 Jacobian value and 98.6 mm as the metric values for a 50% probability of grade 3 esophagitis. The uncertainty in esophageal Jacobian calculations can be reduced with anatomic correction methods.

  7. Underdetermined blind separation of three-way fluorescence spectra of PAHs in water.

    PubMed

    Yang, Ruifang; Zhao, Nanjing; Xiao, Xue; Zhu, Wei; Chen, Yunan; Yin, Gaofang; Liu, Jianguo; Liu, Wenqing

    2018-06-15

In this work, an underdetermined blind decomposition method is developed to recognize individual components from the three-way fluorescence spectra of their mixtures using sparse component analysis (SCA). The mixing matrix is estimated from the mixtures using a fuzzy data clustering algorithm together with the scatter points corresponding to local energy maxima in the time-frequency domain, and the spectra of the object components are recovered by a pseudo-inverse technique. As an example, using this method the spectra of three and four pure components can be blindly extracted from two samples of their mixtures, with similarities between resolved and reference spectra all above 0.80. This work opens a new and effective path to monitoring PAHs in water by the three-way fluorescence spectroscopy technique. Copyright © 2018 Elsevier B.V. All rights reserved.
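
    A minimal sketch of the recovery step: once the mixing matrix has been estimated (here it is simply assumed known), the component spectra are obtained from the mixtures with the Moore-Penrose pseudo-inverse, which gives the minimum-norm solution of the underdetermined system. The matrix entries and data are illustrative, not values from the study.

        import numpy as np

        A = np.array([[0.6, 0.3, 0.1],                  # estimated mixing matrix: 2 mixtures, 3 components
                      [0.2, 0.5, 0.3]])
        X = np.random.default_rng(3).random((2, 400))   # stand-in for two measured mixture spectra

        S_hat = np.linalg.pinv(A) @ X                   # recovered component spectra (pseudo-inverse technique)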

  8. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

To address the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a set of simple single-input single-output OFDM (SISO-OFDM) channel estimation problems, so there is no need for a large matrix pseudo-inverse, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and approaches the optimal performance.
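
    A hedged sketch of the per-subcarrier least-squares estimate that such a decomposition enables for a single transmit-receive pair: with known pilots on each subcarrier, the channel is estimated as H_k = Y_k / X_k, so no large matrix pseudo-inverse is required. The pilot design, channel taps and noise level below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(4)
        n_sc = 64
        pilots = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n_sc))    # QPSK pilot symbols
        h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
        H_true = np.fft.fft(h, n_sc)                                  # true channel frequency response
        noise = 0.05 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
        Y = H_true * pilots + noise                                   # received pilot subcarriers

        H_ls = Y / pilots                                             # per-subcarrier LS channel estimate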

  9. Ground Demonstration on the Autonomous Docking of Two 3U CubeSats Using a Novel Permanent-Magnet Docking Mechanism

    NASA Technical Reports Server (NTRS)

Pei, Jing; Murchison, Luke; BenShabat, Adam; Stewart, Victor; Rosenthal, James; Follman, Jacob; Branchy, Mark; Sellers, Drew; Elandt, Ryan; Elliott, Sawyer

    2017-01-01

Small spacecraft autonomous rendezvous and docking is an essential technology for future space structure assembly missions. A novel magnetic capture and latching mechanism is analyzed that allows docking of two CubeSats without precise sensors and actuators. The proposed magnetic docking hardware not only provides the means to latch the CubeSats but also significantly increases the likelihood of successful docking in the presence of relative attitude and position errors. The simplicity of the design allows it to be implemented on many CubeSat rendezvous missions. A CubeSat 3-DOF ground demonstration effort is ongoing at NASA Langley Research Center that enables hardware-in-the-loop testing of the autonomous approach and docking of a follower CubeSat to an identical leader CubeSat. The test setup consists of a 3 meter by 4 meter granite table and two nearly frictionless air-bearing systems that support the two CubeSats. Four cold-gas on-off thrusters are used to translate the follower towards the leader, while a single reaction wheel is used to control the attitude of each CubeSat. An innovative modified pseudo-inverse control allocation scheme was developed to address interactions between control effectors. Because the docking procedure requires relatively high actuator precision, a novel minimum-impulse-bit mitigation algorithm was developed to minimize the undesirable deadzone effects of the thrusters. Simulation of the ground demonstration shows that the guidance, navigation, and control system, along with the docking subsystem, leads to successful docking under 3-sigma dispersions of all key system parameters. Extensive simulation and ground testing will provide sufficient confidence that the proposed docking mechanism, along with the chosen suite of sensors and actuators, will perform successful docking in the space environment.
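
    A hedged sketch of pseudo-inverse control allocation with a crude minimum-impulse-bit guard: the commanded planar force/torque is distributed over the thrusters through the Moore-Penrose pseudo-inverse of an effectiveness matrix, and commands below a threshold are zeroed. The matrix, commands and threshold are illustrative assumptions; the flight scheme additionally modifies the allocation to handle effector interactions.

        import numpy as np

        B = np.array([[1.0, -1.0,  0.0,  0.0],   # x-force contribution of each thruster (assumed geometry)
                      [0.0,  0.0,  1.0, -1.0],   # y-force contribution
                      [0.1, -0.1, -0.1,  0.1]])  # torque contribution (lever arms)

        tau_cmd = np.array([0.4, -0.2, 0.05])    # commanded planar force/torque
        u = np.linalg.pinv(B) @ tau_cmd          # minimum-norm thruster commands

        min_impulse = 0.05                       # deadzone stand-in for the minimum impulse bit
        u = np.where(np.abs(u) < min_impulse, 0.0, u)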

  10. Concurrence control for transactions with priorities

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    Priority inversion occurs when a process is delayed by the actions of another process with less priority. With atomic transactions, the concurrency control mechanism can cause delays, and without taking priorities into account can be a source of priority inversion. Three traditional concurrency control algorithms are extended so that they are free from unbounded priority inversion.

  11. A New Simplified Source Model to Explain Strong Ground Motions from a Mega-Thrust Earthquake - Application to the 2011 Tohoku Earthquake (Mw9.0) -

    NASA Astrophysics Data System (ADS)

    Nozu, A.

    2013-12-01

A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up the contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely longitude, latitude, depth, rupture time, seismic moment and corner frequency. The finite size of the subevent can be taken into account because the corner frequency, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. The results were then compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes, and much smaller for Fourier spectra. This evidence indicates the usefulness of the pseudo point-source model. (Figure: comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composition of the two horizontal components, smoothed with a Parzen window with a bandwidth of 0.05 Hz.)
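
    A brief illustration of the omega-square subevent spectrum assumed by the pseudo point-source model: the spectrum is flat below the corner frequency and decays as f^-2 above it. The moment and corner-frequency values are placeholders, not parameters of the Tohoku model.

        import numpy as np

        def omega_square_spectrum(f, M0, fc):
            """Moment-rate spectrum |S(f)| = M0 / (1 + (f/fc)**2)."""
            return M0 / (1.0 + (f / fc) ** 2)

        f = np.logspace(-2, 1, 200)                            # frequency band (Hz)
        spec = omega_square_spectrum(f, M0=1.0e20, fc=0.1)     # one subevent, illustrative values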

  12. Minimal-Inversion Feedforward-And-Feedback Control System

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

Recent developments in the theory of control systems support the concept of a minimal-inversion feedforward-and-feedback control system consisting of three independently designable control subsystems. The concept is applicable to the control of a linear, time-invariant plant.

  13. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    NASA Astrophysics Data System (ADS)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

    Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the required drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so called CR-Calculus and the adaptive array theory. With this approach it is possible to better control the process performances allowing the step-by-step Jacobian Matrix update. The theoretical bases behind the work are followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.

  14. Regenerable biocide delivery unit, volume 2

    NASA Technical Reports Server (NTRS)

    Atwater, James E.; Wheeler, Richard R., Jr.

    1992-01-01

    Source code for programs dealing with the following topics is presented: (1) life cycle test stand-parametric test stand control (in BASIC); (2) simultaneous aqueous iodine equilibria-true equilibrium (in C); (3) simultaneous aqueous iodine equilibria-pseudo-equilibrium (in C); (4) pseudo-(fast)-equilibrium with iodide initially present (in C); (5) solution of simultaneous iodine rate expressions (in Mathematica); (6) 2nd order kinetics of I2-formic acid in humidity condensate (in Mathematica); (7) prototype RMCV onboard microcontroller (in CAMBASIC); (8) prototype RAM data dump to PC (in BASIC); and (9) prototype real-time data transfer to PC (in BASIC).

  15. Deploy production sliding mesh capability with linear solver benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.; Thomas, Stephen; Barone, Matthew F.

    Wind applications require the ability to simulate rotating blades. To support this use-case, a novel design-order sliding mesh algorithm has been developed and deployed. The hybrid method combines the control volume finite element methodology (CVFEM) with concepts found within a discontinuous Galerkin (DG) finite element method (FEM) to manage a sliding mesh. The method has been demonstrated to be design-order for the tested polynomial bases (P=1 and P=2) and has been deployed to provide production simulation capability for a Vestas V27 (225 kW) wind turbine. Other stationary and canonical rotating flow simulations are also presented. As the majority of wind-energy applications are driving extensive usage of hybrid meshes, a foundational study that outlines near-wall numerical behavior for a variety of element topologies is presented. Results indicate that the proposed nonlinear stabilization operator (NSO) is an effective stabilization methodology to control Gibbs phenomena at large cell Peclet numbers. The study also provides practical mesh resolution guidelines for future analysis efforts. Application-driven performance and algorithmic improvements have been carried out to increase robustness of the scheme on hybrid production wind energy meshes. Specifically, the Kokkos-based Nalu Kernel construct outlined in the FY17/Q4 ExaWind milestone has been transitioned to the hybrid mesh regime. This code base is exercised within a full V27 production run. Simulation timings for parallel search and custom ghosting are presented. As the low-Mach application space requires implicit matrix solves, the cost of matrix reinitialization has been evaluated on a variety of production meshes. Results indicate that at low element counts, i.e., fewer than 100 million elements, matrix graph initialization and preconditioner setup times are small. However, as mesh sizes increase, e.g., 500 million elements, simulation time associated with "set-up" costs can increase to nearly 50% of overall simulation time when using the full Tpetra solver stack and nearly 35% when using a mixed Tpetra-Hypre-based solver stack. The report also highlights the project achievement of surpassing the 1 billion element mesh scale for a production V27 hybrid mesh. A detailed timing breakdown is presented that again suggests work to be done in the setup events associated with the linear system. In order to mitigate these initialization costs, several application paths have been explored, all of which are designed to reduce the frequency of matrix reinitialization. Methods such as removing Jacobian entries on the dynamic matrix columns (in concert with increased inner equation iterations) and lagging of Jacobian entries have reduced setup times at the cost of numerical stability. An approach that artificially increases, or bloats, the matrix stencil to ensure that full Jacobians are included has also been developed, with results suggesting that this methodology is useful in decreasing reinitialization events without loss of matrix contributions. With the above foundational advances in computational capability, the project is well positioned to begin scientific inquiry on a variety of wind-farm physics such as turbine/turbine wake interactions.
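
    Purely to illustrate the "lagging of Jacobian entries" idea mentioned above (reusing a stale matrix for several outer iterations so that expensive setup events occur less often), here is a conceptual Python sketch on a toy nonlinear system; it is not Nalu-Wind code and every name in it is illustrative.

```python
# Conceptual sketch of a lagged-Jacobian Newton loop: the matrix (and, in practice, its
# preconditioner) is rebuilt only every `lag` iterations, trading some nonlinear
# convergence speed for fewer expensive setup events.
import numpy as np

def newton_with_lagged_jacobian(residual, jacobian, x0, lag=5, iters=30, tol=1e-10):
    x = x0.copy()
    J = jacobian(x)                      # expensive "setup" event
    for k in range(iters):
        if k % lag == 0 and k > 0:
            J = jacobian(x)              # periodic reinitialization only
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x -= np.linalg.solve(J, r)       # reuse the stale Jacobian in between
    return x

# toy nonlinear system: x_i^3 - 1 = 0 for each component
res = lambda x: x**3 - 1.0
jac = lambda x: np.diag(3.0 * x**2)
print(newton_with_lagged_jacobian(res, jac, np.full(4, 2.0)))
```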

  16. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    PubMed

    Yan, Zheng; Wang, Jun

    2014-03-01

    This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue, combined with the unmodeled dynamics, is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.
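
    A minimal sketch, under assumed dynamics, of the first two steps described above: numerical Jacobian linearization of a nominal model about an operating point, followed by fitting the linearization residue with an extreme learning machine (random hidden layer, least-squares output weights). The nominal model, dimensions and training data are illustrative assumptions, not the paper's system.

```python
# Hedged sketch: Jacobian linearization of an assumed nominal model plus an ELM fit of the residue.
import numpy as np

def f_nom(x, u):                                  # assumed nominal dynamics x+ = f(x, u)
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def numerical_jacobians(f, x0, u0, eps=1e-6):
    """Finite-difference A = df/dx and B = df/du at the operating point (x0, u0)."""
    n, m = len(x0), len(u0)
    A, B, f0 = np.zeros((n, n)), np.zeros((n, m)), f(x0, u0)
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B, f0

def fit_elm(X, Y, hidden=50, seed=0):
    """Extreme learning machine: random input weights, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W, b = rng.standard_normal((X.shape[1], hidden)), rng.standard_normal(hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), Y, rcond=None)
    return lambda Z: np.tanh(Z @ W + b) @ beta

# training data: residue = f_nom(x, u) - (f0 + A (x - x0) + B (u - u0))
x0, u0 = np.zeros(2), np.zeros(1)
A, B, f0 = numerical_jacobians(f_nom, x0, u0)
X = np.random.default_rng(1).uniform(-1, 1, (500, 3))          # columns: x1, x2, u
R = np.array([f_nom(z[:2], z[2:]) - (f0 + A @ (z[:2] - x0) + B @ (z[2:] - u0)) for z in X])
residue_model = fit_elm(X, R)
print(np.abs(R - residue_model(X)).max())                       # training fit of the residue
```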

  17. Dynamic Analysis and Control of Lightweight Manipulators with Flexible Parallel Link Mechanisms. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Lee, Jeh Won

    1990-01-01

    The objective is the theoretical analysis and experimental verification of the dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. The resulting equations of motion have a structure which is useful for reducing the number of terms calculated, checking correctness, or extending the model to higher order. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using a singular value decomposition of the constraint Jacobian matrix. Elastic motion is expressed by the assumed mode method. Mode shape functions of each link are chosen using load-interfaced component mode synthesis. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model.
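
    A small sketch of the general technique named above (an SVD of the constraint Jacobian provides a null-space basis onto which the constrained dynamics are projected); the matrices are toy values, not the thesis model.

```python
# Hedged sketch: null-space projection of constrained dynamics M qdd + h = tau + Jc^T lambda,
# subject to Jc qdd = -dJc_qdot, using the SVD of the constraint Jacobian Jc.
import numpy as np

def constrained_accel(M, h, tau, Jc, dJc_qdot):
    """Solve for qdd subject to Jc qdd = -dJc_qdot via an SVD null-space projection."""
    U, s, Vt = np.linalg.svd(Jc)
    r = np.sum(s > 1e-10)              # numerical rank of the constraint Jacobian
    N = Vt[r:].T                       # columns span the null space of Jc
    qdd_p = np.linalg.lstsq(Jc, -dJc_qdot, rcond=None)[0]   # particular solution
    # independent accelerations z from N^T (M (qdd_p + N z) + h - tau) = 0
    z = np.linalg.solve(N.T @ M @ N, N.T @ (tau - h - M @ qdd_p))
    return qdd_p + N @ z

M = np.diag([2.0, 1.0, 1.0])           # toy mass matrix
h = np.array([0.0, 9.81, 0.0])         # toy bias forces
tau = np.array([1.0, 0.0, 0.5])        # toy generalized forces
Jc = np.array([[1.0, -1.0, 0.0]])      # one holonomic constraint
print(constrained_accel(M, h, tau, Jc, dJc_qdot=np.zeros(1)))
```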

  18. Assessing the Hydraulic Criticality of Deep Ocean Overflows

    NASA Astrophysics Data System (ADS)

    Pratt, L. J.; Helfrich, K. R.

    2004-12-01

    Two methods for assessing the hydraulic criticality of a modelled or observed deep overflow are discussed. The methods should be of use in determining the position of the control section, which is needed to establish the transport relation used for long-term monitoring from upstream. Both approaches are based on a multiple-streamtube idealization in which the observed flow at a particular section is divided into subsections (streamtubes). There are no restrictions on the bottom topography or potential vorticity distribution. The first criterion involves evaluation of a generalized Jacobian condition based on the conservation laws for each streamtube; the second involves direct calculation of the long-wave phase speeds. We also comment on the significance of the local Froude number F of the flow and argue that F must pass through unity across a section of hydraulic control. These criteria are applied to some numerically modelled flows and are used in the companion presentation (Girton et al.) to evaluate the hydraulic criticality of the Faroe Bank Channel.
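
    As an illustration of the simpler of these diagnostics, the sketch below computes a local Froude number F = u / sqrt(g' d) for each streamtube at several sections and flags where the section-averaged F crosses unity, a candidate control section. The reduced gravity and the sample numbers are assumptions; the paper's generalized Jacobian condition and phase-speed calculation are more involved.

```python
import numpy as np

def froude_number(u, d, g_reduced=5e-4):
    """Local Froude number for a streamtube with velocity u (m/s) and thickness d (m)."""
    return u / np.sqrt(g_reduced * d)

# assumed velocities (m/s) and layer thicknesses (m) for 3 streamtubes at 4 sections
u = np.array([[0.2, 0.3, 0.5, 0.8],
              [0.1, 0.2, 0.4, 0.7],
              [0.1, 0.2, 0.3, 0.6]])
d = np.array([[300., 250., 180., 120.],
              [280., 240., 170., 110.],
              [260., 220., 160., 100.]])
F = froude_number(u, d)
Fbar = F.mean(axis=0)                                   # crude section-averaged Froude number
print(Fbar, np.where(np.diff(np.sign(Fbar - 1.0)))[0])  # index where F crosses unity
```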

  19. Measurement of brain perfusion in newborns: Pulsed arterial spin labeling (PASL) versus pseudo-continuous arterial spin labeling (pCASL)

    PubMed Central

    Boudes, Elodie; Gilbert, Guillaume; Leppert, Ilana Ruth; Tan, Xianming; Pike, G. Bruce; Saint-Martin, Christine; Wintermark, Pia

    2014-01-01

    Background Arterial spin labeling (ASL) perfusion-weighted imaging (PWI) by magnetic resonance imaging (MRI) has been shown to be useful for identifying asphyxiated newborns at risk of developing brain injury, whether or not therapeutic hypothermia was administered. However, this technique has rarely been used in newborns until now, because of the challenge of obtaining sufficient signal-to-noise ratio (SNR) and spatial resolution in newborns. Objective To compare two methods of ASL-PWI (i.e., single inversion-time pulsed arterial spin labeling [single-TI PASL] and pseudo-continuous arterial spin labeling [pCASL]) for assessing brain perfusion in asphyxiated newborns treated with therapeutic hypothermia and in healthy newborns. Design/methods We conducted a prospective cohort study of term asphyxiated newborns meeting the criteria for therapeutic hypothermia; four additional healthy term newborns were also included as controls. Each of the enrolled newborns was scanned at least once during the first month of life. Each MRI scan included conventional anatomical imaging, as well as PASL and pCASL PWI-MRI. Control and labeled images were registered separately to reduce the effect of motion artifacts. For each scan, the axial slice at the level of the basal ganglia was used for comparisons. Each scan was scored for its image quality. Quantification of whole-slice cerebral blood flow (CBF) was then done using previously described formulas. Results A total of 61 concomitant PASL and pCASL scans were obtained in nineteen asphyxiated newborns treated with therapeutic hypothermia and four healthy newborns. After discarding the scans with very poor image quality, 75% (46/61) remained for comparison between the two ASL methods. pCASL images had a significantly higher image-quality score than PASL images (p < 0.0001). Strong correlation was found between the CBF measured by PASL and pCASL (r = 0.61, p < 0.0001). Conclusion This study demonstrates that both ASL methods are feasible for assessing brain perfusion in healthy and sick newborns. However, pCASL might be a better choice than PASL in newborns, as pCASL perfusion maps had a superior image quality that allowed a more detailed identification of the different brain structures. PMID:25379424
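
    For orientation only, the sketch below applies the widely used single-compartment pCASL quantification formula; the study refers to its own previously described formulas, which may differ, and the parameter values here (labeling efficiency, blood T1, blood-brain partition coefficient, post-labeling delay, label duration) are adult-literature defaults rather than the neonatal values used in the paper.

```python
# Hedged illustration of a standard single-compartment pCASL CBF quantification;
# not necessarily the formulas used in the cited study.
import numpy as np

def pcasl_cbf(delta_m, m0, pld=1.8, tau=1.8, t1_blood=1.65, alpha=0.85, lam=0.9):
    """CBF in ml/100 g/min from the control-label difference delta_m and the M0 image."""
    return (6000.0 * lam * delta_m * np.exp(pld / t1_blood)
            / (2.0 * alpha * t1_blood * m0 * (1.0 - np.exp(-tau / t1_blood))))

print(pcasl_cbf(delta_m=0.009, m0=1.0))   # on the order of tens of ml/100 g/min
```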

  20. Effects of momentary self-monitoring on empowerment in a randomized controlled trial in patients with depression.

    PubMed

    Simons, C J P; Hartmann, J A; Kramer, I; Menne-Lothmann, C; Höhn, P; van Bemmel, A L; Myin-Germeys, I; Delespaul, P; van Os, J; Wichers, M

    2015-11-01

    Interventions based on the experience sampling method (ESM) are ideally suited to provide insight into personal, contextualized affective patterns in the flow of daily life. Recently, we showed that an ESM intervention focusing on positive affect was associated with a decrease in symptoms in patients with depression. The aim of the present study was to examine whether the ESM intervention increased patient empowerment. Depressed out-patients (n=102) receiving psychopharmacological treatment participated in a randomized controlled trial with three arms: (i) an experimental group receiving six weeks of ESM self-monitoring combined with weekly feedback sessions, (ii) a pseudo-experimental group participating in six weeks of ESM self-monitoring without feedback, and (iii) a control group (treatment as usual only). Patients were recruited in the Netherlands between January 2010 and February 2012. Self-report empowerment scores were obtained pre- and post-intervention. There was an effect of group × assessment period, indicating that the experimental group (B=7.26, P=0.061, d=0.44, statistically imprecise) and the pseudo-experimental group (B=11.19, P=0.003, d=0.76) increased more in reported empowerment than the control group. In the pseudo-experimental group, 29% of the participants showed a statistically reliable increase in empowerment score and 0% a reliable decrease, compared to 17% reliable increase and 21% reliable decrease in the control group. The experimental group showed 19% reliable increase and 4% reliable decrease. These findings tentatively suggest that self-monitoring to complement standard antidepressant treatment may increase patients' feelings of empowerment. Further research is necessary to investigate long-term empowering effects of self-monitoring in combination with person-tailored feedback. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
