Sample records for joint inversion algorithm

  1. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward and inverse algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data in the entire region (near, transition, and far field) and to deal with the effects of artificial sources. First, a regularization factor is introduced in the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, the mutual influence between the two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than the single-method inversions. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
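
    As a minimal illustration of the cross-gradient coupling described above (not the authors' code; the grid, spacing, and model values are invented), the 2D cross-gradient between two property models can be computed with finite differences and vanishes wherever the two models share structure:

        import numpy as np

        def cross_gradient(m1, m2, dx=1.0, dz=1.0):
            """Cross-gradient between two 2D property models; zero where the gradients are parallel."""
            g1x, g1z = np.gradient(m1, dx, dz)
            g2x, g2z = np.gradient(m2, dx, dz)
            return g1x * g2z - g1z * g2x          # out-of-plane component of the cross product

        # toy models sharing one block anomaly: the cross-gradient is zero everywhere
        m1 = np.zeros((50, 60)); m1[20:30, 25:40] = 1.0
        m2 = np.zeros((50, 60)); m2[20:30, 25:40] = 2.5
        print(np.abs(cross_gradient(m1, m2)).max())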

  2. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo

    2015-01-01

    Electrical (DC) and transient electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished with individual 1D models, often resulting in ambiguous models. This can be explained by the way the two methodologies sample the medium beneath the surface. Vertical electrical sounding (VES) is good at marking resistive structures, while the transient electromagnetic sounding (TEM) is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed, aiming to exploit the best of both methods. The program uses the Controlled Random Search (CRS) algorithm for both the single and the 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. Initially, the algorithm was tested with synthetic data, and then it was used to invert experimental data from two places in the Paraná sedimentary basin (Bebedouro and Pirassununga cities), both located in São Paulo State, Brazil. The geoelectric model obtained from the 1D joint inversion of VES and TEM data is consistent with the real geological conditions, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological studies.
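
    A bare-bones Controlled Random Search loop in the spirit of the algorithm referenced above is sketched below in Python; the misfit function, bounds, and population size are placeholders rather than the actual VES/TEM parameterization:

        import numpy as np

        def crs_minimize(misfit, lower, upper, n_pop=50, n_iter=5000, rng=None):
            rng = np.random.default_rng(rng)
            ndim = len(lower)
            pop = rng.uniform(lower, upper, size=(n_pop, ndim))
            cost = np.array([misfit(p) for p in pop])
            for _ in range(n_iter):
                idx = rng.choice(n_pop, size=ndim + 1, replace=False)
                centroid = pop[idx[:-1]].mean(axis=0)
                trial = 2.0 * centroid - pop[idx[-1]]          # reflection step
                if np.all(trial >= lower) and np.all(trial <= upper):
                    f = misfit(trial)
                    worst = np.argmax(cost)
                    if f < cost[worst]:                         # replace the current worst model
                        pop[worst], cost[worst] = trial, f
            best = np.argmin(cost)
            return pop[best], cost[best]

        # toy joint misfit standing in for a combined VES + TEM objective
        best, fbest = crs_minimize(lambda m: np.sum((m - 3.0) ** 2) + np.sum((m - 3.0) ** 4),
                                   lower=np.zeros(3), upper=np.full(3, 10.0))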

  3. Joint inversion of multiple geophysical and petrophysical data using generalized fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Li, Yaoguo

    2017-02-01

    Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data, because there often exists more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic examples and one field data example and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations encountered in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and in the parameter domain of physical properties.
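
    For orientation, the following Python sketch shows a bare-bones fuzzy c-means loop on a two-property crossplot; the data, number of clusters, and fuzzification exponent are illustrative assumptions, and the alternative distance measures discussed in the paper are not shown:

        import numpy as np

        def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, rng=0):
            rng = np.random.default_rng(rng)
            u = rng.random((len(x), n_clusters))
            u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1
            for _ in range(n_iter):
                um = u ** m
                centers = um.T @ x / um.sum(axis=0)[:, None]  # weighted cluster centers
                d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)
            return centers, u

        # toy crossplot with two petrophysical trends (e.g. density vs. slowness)
        rng = np.random.default_rng(1)
        x = np.vstack([rng.normal([2, 5], 1.0, (100, 2)), rng.normal([6, 1], 1.0, (100, 2))])
        centers, memberships = fuzzy_c_means(x)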

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  5. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    NASA Astrophysics Data System (ADS)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we develop an improved multi-objective genetic algorithm (NSGA-SBX) and apply it to two numerical tests to verify its advantages. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in the case of inconsistent discontinuities between these models, joint inversion retains the advantages of the individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust, probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low-velocity layers and a high-conductivity layer are observed in the middle-lower crust, probably indicating mid-crustal channel flow. Along the Longmenshan fault zone, a high-conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.
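
    The SBX in the name refers to simulated binary crossover, the recombination operator commonly paired with NSGA-II. A generic SBX operator is sketched below; the parent vectors and distribution index are arbitrary, and this is not the authors' implementation:

        import numpy as np

        def sbx_crossover(p1, p2, eta=15.0, rng=None):
            rng = np.random.default_rng(rng)
            u = rng.random(p1.shape)
            beta = np.where(u <= 0.5,
                            (2.0 * u) ** (1.0 / (eta + 1.0)),
                            (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
            c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
            c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
            return c1, c2

        # e.g. two candidate layered models encoded as parameter vectors
        child1, child2 = sbx_crossover(np.array([1.0, 2.0, 3.0]), np.array([1.5, 1.0, 4.0]))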

  6. An Improved 3D Joint Inversion Method of Potential Field Data Using Cross-Gradient Constraint and LSQR Method

    NASA Astrophysics Data System (ADS)

    Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz

    2018-06-01

    The joint interpretation of two sets of geophysical data related to the same source is an appropriate way of decreasing the non-uniqueness of the resulting models during the inversion process. Among the available methods, combining the two datasets through a cross-gradient constraint is an efficient approach. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of the position and extent of the anomaly of interest. In this paper, the first step is to speed up the required calculations by replacing singular value decomposition with the least-squares QR (LSQR) method, so that the large-scale kernel matrix of the 3D inversion is solved more rapidly. Furthermore, to improve the accuracy of the resulting models, a combination of a depth-weighting matrix and a compactness constraint, with automatic selection of the covariance of the initial parameters, is used in the proposed inversion algorithm. The algorithm was developed in the Matlab environment and first implemented on synthetic data. The 3D joint inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of the algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset from the Jalalabad mine in southeastern Iran was tested. The results obtained by the improved joint 3D cross-gradient inversion with the compactness constraint show a mineralised zone in the depth interval of about 110-300 m, which is in good agreement with the available drilling data. This further confirms the accuracy and improvement of the inversion algorithm.
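
    As a rough sketch of the LSQR-plus-depth-weighting idea (not the authors' code; the kernel, depths, weighting exponent, and damping below are invented), a depth-weighted damped least-squares problem can be solved without ever forming an SVD:

        import numpy as np
        from scipy.sparse.linalg import lsqr

        def depth_weighted_lsqr(G, d, z, z0=1.0, beta=3.0, damp=1e-2):
            # Li-Oldenburg-style weight w(z) = (z + z0)^(-beta/2): solve for y = W m with
            # damped LSQR applied to G W^-1, then map back to m = W^-1 y.
            w = (z + z0) ** (-0.5 * beta)
            Gw = G / w                          # scale each column j of G by 1/w[j]
            y = lsqr(Gw, d, damp=damp)[0]
            return y / w

        # toy example: random dense kernel, synthetic blocky model, noise-free data
        rng = np.random.default_rng(0)
        G = rng.standard_normal((200, 500))
        z = np.linspace(1.0, 50.0, 500)
        m_true = np.zeros(500); m_true[300:320] = 1.0
        m_est = depth_weighted_lsqr(G, G @ m_true, z)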

  7. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.

  8. Joint inversions of two VTEM surveys using quasi-3D TDEM and 3D magnetic inversion algorithms

    NASA Astrophysics Data System (ADS)

    Kaminski, Vlad; Di Massa, Domenico; Viezzoli, Andrea

    2016-05-01

    In the current paper, we present results of a joint quasi-three-dimensional (quasi-3D) inversion of two versatile time domain electromagnetic (VTEM) datasets, as well as a joint 3D inversion of associated aeromagnetic datasets, from two surveys flown six years apart from one another (2007 and 2013) over a volcanogenic massive sulphide gold (VMS-Au) prospect in northern Ontario, Canada. The time domain electromagnetic (TDEM) data were inverted jointly using the spatially constrained inversion (SCI) approach. In order to increase the coherency in the model space, a calibration parameter was added. This was followed by a joint inversion of the total magnetic intensity (TMI) data extracted from the two surveys. The results of the inversions have been studied and matched with the known geology, adding some new valuable information to the ongoing mineral exploration initiative.

  9. Joint inversion of teleseismic receiver functions and magnetotelluric data using a genetic algorithm: Are seismic velocities and electrical conductivities compatible?

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Jones, A. G.; Eaton, D. W.

    2007-08-01

    Joint inversion of different kinds of geophysical data has the potential to improve model resolution, under the assumption that the different observations are sensitive to the same subsurface features. Here, we examine the compatibility of P-wave teleseismic receiver functions and long-period magnetotelluric (MT) observations, using joint inversion, to infer one-dimensional lithospheric structure. We apply a genetic algorithm to invert teleseismic and MT data from the Slave craton, a region where previous independent analyses of these data have indicated correlated layering of the lithosphere. Examination of model resolution and parameter trade-off suggests that the main features of this area, the Moho, the Central Slave Mantle Conductor and the Lithosphere-Asthenosphere boundary, are sensed to varying degrees by both methods. Thus, joint inversion of these two complementary data sets can be used to construct improved models of the lithosphere. Further studies will be needed to assess whether the approach can be applied globally.

  10. Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.

    PubMed

    Borbély, Bence J; Szolgay, Péter

    2017-01-17

    Model-based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model-based analysis. These packages provide computationally intensive, and therefore offline, solutions for calculating the anatomical joint angles from motion-captured raw measurement data (also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases where a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.

  11. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

    This paper presents a general method for calculating the inverse kinematics, with singularity and joint-limit robustness, for both redundant and non-redundant serial-link manipulators. A damped least-squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution; the weighting reduces the differential motion of specific joints. The algorithm gives an exact solution away from singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure was implemented for a six-DOF teleoperator, and a well-behaved slave manipulator resulted under teleoperational control.
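
    A generic weighted damped least-squares update of this kind is sketched below; the Jacobian, joint weights, and damping factor are illustrative choices, not the paper's teleoperator setup:

        import numpy as np

        def dls_ik_step(J, dx, damping=0.1, joint_weights=None):
            """One joint-space update dq for a desired task-space increment dx.
            A larger weight on a joint penalizes its motion more (useful near joint limits)."""
            n = J.shape[1]
            W = np.diag(joint_weights) if joint_weights is not None else np.eye(n)
            Wi = np.linalg.inv(W)
            # dq = W^-1 J^T (J W^-1 J^T + lambda^2 I)^-1 dx
            JWJt = J @ Wi @ J.T + (damping ** 2) * np.eye(J.shape[0])
            return Wi @ J.T @ np.linalg.solve(JWJt, dx)

        # 2-link planar arm example (hypothetical geometry)
        q = np.array([0.3, 0.9]); l1 = l2 = 1.0
        J = np.array([[-l1*np.sin(q[0]) - l2*np.sin(q.sum()), -l2*np.sin(q.sum())],
                      [ l1*np.cos(q[0]) + l2*np.cos(q.sum()),  l2*np.cos(q.sum())]])
        dq = dls_ik_step(J, dx=np.array([0.01, -0.02]), joint_weights=[1.0, 10.0])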

  12. Two-dimensional joint inversion of Magnetotelluric and local earthquake data: Discussion on the contribution to the solution of deep subsurface structures

    NASA Astrophysics Data System (ADS)

    Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin

    2018-02-01

    Joint inversion of data sets collected with several geophysical exploration methods has gained importance, and associated algorithms have been developed. To explore deep subsurface structures, Magnetotelluric and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, it is not possible to increase data quality and the resolution of model parameters at will. For this reason, deep structures cannot be fully resolved by using the methods individually. In this paper, we first focus on the effects of both Magnetotelluric and local earthquake data sets on the solution of deep structures and discuss the results on the basis of the resolving power of the methods. The presence of deep-focus seismic sources increases the resolution of deep structures. Moreover, the conductivity distribution of relatively shallow structures can be resolved with high resolution by using the MT algorithm. Therefore, we developed a new joint inversion algorithm based on the cross-gradient function in order to jointly invert Magnetotelluric and local earthquake data sets. In this study, we added a new regularization parameter to the second term of the parameter correction vector of Gallardo and Meju (2003). The new regularization parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term to the solution. The results show that even in cases where the resistivity and velocity boundaries differ, both methods influence each other positively. In addition, the regions of common structural boundaries of the models are clearly mapped compared with the original models. Furthermore, deep structures are identified satisfactorily even with the minimum number of seismic sources. In this paper, as groundwork for future studies, we discuss the joint inversion of Magnetotelluric and local earthquake data sets only in two-dimensional space. In the light of these results, and given the progress in three-dimensional modelling and inversion algorithms, it should become easier to identify underground structures with high resolution.

  13. 2D data-space cross-gradient joint inversion of MT, gravity and magnetic data

    NASA Astrophysics Data System (ADS)

    Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop

    2017-08-01

    We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and model parameter sets, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver to avoid calculating the inverse matrix when solving the large system of equations. Next, using the model-space and data-space methods, we inverted the synthetic data and field data. Based on our results, the joint inversion in data space not only delineates geological bodies more clearly than the separate inversions, but also yields results nearly equal to those of the model-space method while consuming much less memory.

  14. Joint Inversion of Body-Wave Arrival Times and Surface-Wave Dispersion Data in the Wavelet Domain Constrained by Sparsity Regularization

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.

    2014-12-01

    Recently, Zhang et al. (2014, Pure and Applied Geophysics) developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code is based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver of the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves for the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution so that the model has higher spatial resolution in zones of good data coverage. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface-wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface-wave dispersion data for 3-D variations of shear-wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface-wave traveltimes and ray paths between sources and receivers. We will test the new joint inversion code at the SAFOD site to compare its performance with that of the previous code. We will also select another fault zone, such as the San Jacinto Fault Zone, to better image its structure.
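
    To make the wavelet-domain sparsity idea concrete, here is a toy iterative soft-thresholding (ISTA) step written against a single-level Haar transform; the kernel, step size, and threshold are invented and bear no relation to the tomoDD-based code:

        import numpy as np

        def haar(x):            # single-level orthonormal Haar analysis (even-length input)
            a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
            d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
            return np.concatenate([a, d])

        def ihaar(c):           # single-level Haar synthesis
            n = len(c) // 2
            a, d = c[:n], c[n:]
            x = np.empty(2 * n)
            x[0::2] = (a + d) / np.sqrt(2.0)
            x[1::2] = (a - d) / np.sqrt(2.0)
            return x

        def soft(c, t):         # soft-thresholding enforces sparsity of the coefficients
            return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

        def ista_step(m, G, d, step=1e-3, thresh=0.05):
            grad = G.T @ (G @ m - d)                       # data-misfit gradient
            return ihaar(soft(haar(m - step * grad), thresh))

        rng = np.random.default_rng(0)
        G = rng.standard_normal((60, 128)); m = np.zeros(128)
        d = G @ np.repeat([0.0, 1.0, 0.0, -0.5], 32)       # blocky "true" model
        for _ in range(200):
            m = ista_step(m, G, d)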

  15. An efficient sequential strategy for realizing cross-gradient joint inversion: method and its application to 2-D cross borehole seismic traveltime and DC resistivity tomography

    NASA Astrophysics Data System (ADS)

    Gao, Ji; Zhang, Haijiang

    2018-05-01

    Cross-gradient joint inversion that enforces structural similarity between different models has been widely utilized in jointly inverting different geophysical data types. However, it is a challenge to combine different geophysical inversion systems with the cross-gradient structural constraint into one joint inversion system because they may differ greatly in the model representation, forward modelling and inversion algorithm. Here we propose a new joint inversion strategy that can avoid this issue. Different models are separately inverted using the existing inversion packages and model structure similarity is only enforced through cross-gradient minimization between two models after each iteration. Although the data fitting and structural similarity enforcing processes are decoupled, our proposed strategy is still able to choose appropriate models to balance the trade-off between geophysical data fitting and structural similarity. This is realized by using model perturbations from separate data inversions to constrain the cross-gradient minimization process. We have tested this new strategy on 2-D cross borehole synthetic seismic traveltime and DC resistivity data sets. Compared to separate geophysical inversions, our proposed joint inversion strategy fits the separate data sets at comparable levels while at the same time resulting in a higher structural similarity between the velocity and resistivity models.

  16. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can be either analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets, and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
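
    Extracting the non-dominated members of such a solution population is straightforward; the sketch below filters a synthetic population of two-objective misfit pairs and illustrates only the Pareto concept, not the authors' genetic algorithm:

        import numpy as np

        def pareto_front(costs):
            """costs: (n_models, n_objectives) array of misfits (lower is better).
            Returns a boolean mask of models not dominated by any other model."""
            n = len(costs)
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                if not keep[i]:
                    continue
                dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
                if np.any(dominated):
                    keep[i] = False
            return keep

        misfits = np.random.default_rng(1).random((200, 2))   # e.g. (data set 1, data set 2) misfit pairs
        front = misfits[pareto_front(misfits)]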

  17. WSJointInv2D-MT-DCR: An efficient joint two-dimensional magnetotelluric and direct current resistivity inversion

    NASA Astrophysics Data System (ADS)

    Amatyakul, Puwis; Vachiratienchai, Chatchai; Siripunvaraporn, Weerachai

    2017-05-01

    An efficient joint two-dimensional direct current resistivity (DCR) and magnetotelluric (MT) inversion, referred to as WSJointInv2D-MT-DCR, was developed in FORTRAN 95 based on the data-space Occam's inversion algorithm. Our joint inversion software can be used to invert just the MT data or the DCR data, or to invert both data sets simultaneously to get the electrical resistivity structures. Since both MT and DCR surveys yield the same resistivity structures, the two data types enhance each other, leading to a better interpretation. Two synthetic examples and a real field survey are used here to demonstrate that joint DCR and MT surveys can help constrain each other to reduce the ambiguities that occur when inverting the DCR or MT data alone. The DCR data increase the lateral resolution of the near-surface structures, while the MT data reveal the deeper structures. When the MT apparent resistivity suffers from static shift, the DCR apparent resistivity can serve as a replacement for the estimation of the static shift factor using the joint inversion. In addition, we also use these examples to show the efficiency of our joint inversion code. With the availability of our new joint inversion software, we expect the number of joint DCR and MT surveys to increase in the future.

  18. Joint body and surface wave tomography applied to the Toba caldera complex (Indonesia)

    NASA Astrophysics Data System (ADS)

    Jaxybulatov, Kairly; Koulakov, Ivan; Shapiro, Nikolai

    2016-04-01

    We developed a new algorithm for joint body- and surface-wave tomography. The algorithm is a modification of the existing LOTOS code (Koulakov, 2009) developed for local earthquake tomography. The input data for the new method are travel times of P and S waves and dispersion curves of Rayleigh and Love waves. The main idea is that the two data types have complementary sensitivities. The body-wave data have good resolution at depth, where we have enough crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution. The surface-wave dispersion curves can be retrieved from correlations of the ambient seismic noise, and in this case the sampled path distribution does not depend on the earthquake sources. The contributions of the two data types to the inversion are controlled by the weighting of the respective equations. One of the clearest cases where such an approach may be useful is volcanic systems in subduction zones, with their complex magmatic feeding systems that have deep roots in the mantle and intermediate magma chambers in the crust. In these areas, the joint inversion of different types of data helps us build a comprehensive understanding of the entire system. We apply our algorithm to data collected in the region surrounding the Toba caldera complex (northern Sumatra, Indonesia) during two temporary seismic experiments (IRIS, PASSCAL, 1995; GFZ, LAKE TOBA, 2008). We invert 6644 P- and 5240 S-wave arrivals and ~500 group-velocity dispersion curves of Rayleigh and Love waves. We present a series of synthetic tests and real-data inversions which show that the joint inversion approach gives more reliable results than the separate inversion of the two data types. Koulakov, I., LOTOS code for local earthquake tomographic inversion. Benchmarks for testing tomographic algorithms, Bull. seism. Soc. Am., 99(1), 194-214, 2009, doi:10.1785/0120080013

  19. A new algorithm for three-dimensional joint inversion of body wave and surface wave data and its application to the Southern California plate boundary region

    NASA Astrophysics Data System (ADS)

    Fang, Hongjian; Zhang, Haijiang; Yao, Huajian; Allam, Amir; Zigone, Dimitri; Ben-Zion, Yehuda; Thurber, Clifford; van der Hilst, Robert D.

    2016-05-01

    We introduce a new algorithm for joint inversion of body wave and surface wave data to get better 3-D P wave (Vp) and S wave (Vs) velocity models by taking advantage of the complementary strengths of each data set. Our joint inversion algorithm uses a one-step inversion of surface wave traveltime measurements at different periods for 3-D Vs and Vp models without constructing the intermediate phase or group velocity maps. This allows a more straightforward modeling of surface wave traveltime data with the body wave arrival times. We take into consideration the sensitivity of surface wave data with respect to Vp in addition to its large sensitivity to Vs, which means both models are constrained by two different data types. The method is applied to determine 3-D crustal Vp and Vs models using body wave and Rayleigh wave data in the Southern California plate boundary region, which has previously been studied with both double-difference tomography method using body wave arrival times and ambient noise tomography method with Rayleigh and Love wave group velocity dispersion measurements. Our approach creates self-consistent and unique models with no prominent gaps, with Rayleigh wave data resolving shallow and large-scale features and body wave data constraining relatively deeper structures where their ray coverage is good. The velocity model from the joint inversion is consistent with local geological structures and produces better fits to observed seismic waveforms than the current Southern California Earthquake Center (SCEC) model.

  20. Inverse kinematics problem in robotics using neural networks

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, multilayer feedforward networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way both to model the manipulator inverse kinematics and to circumvent the problems associated with algorithmic solution methods.
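
    A toy version of this idea is sketched below: a small multilayer perceptron is fit to samples of the forward kinematics of a planar 2-DOF arm and then queried for joint angles. The paper uses a 3-DOF spatial arm, so the link lengths, joint ranges, and network size here are simplified assumptions:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        l1, l2 = 1.0, 0.8
        rng = np.random.default_rng(0)
        q = rng.uniform([0.0, 0.1], [np.pi / 2, np.pi - 0.1], size=(5000, 2))   # joint angles
        x = np.c_[l1 * np.cos(q[:, 0]) + l2 * np.cos(q.sum(axis=1)),
                  l1 * np.sin(q[:, 0]) + l2 * np.sin(q.sum(axis=1))]            # forward kinematics

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(x, q)
        q_pred = net.predict([[1.2, 0.9]])     # joint angles for a desired end-effector position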

  1. Joint Inversion of Body-Wave Arrival Times and Surface-Wave Dispersion Data for Three-Dimensional Seismic Velocity Structure Around SAFOD

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Thurber, C. H.; Maceira, M.; Roux, P.

    2013-12-01

    The crust around the San Andreas Fault Observatory at Depth (SAFOD) has been the subject of many geophysical studies aimed at characterizing in detail the fault zone structure and elucidating the lithologies and physical properties of the surrounding rocks. Seismic methods in particular have revealed the complex two-dimensional (2D) and three-dimensional (3D) structure of the crustal volume around SAFOD and the strong velocity reduction in the fault damage zone. In this study we conduct a joint inversion using body-wave arrival times and surface-wave dispersion data to image the P- and S-wave velocity structure of the upper crust surrounding SAFOD. The two data types have complementary strengths: the body-wave data have good resolution at depth, albeit only where there are crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution and are not dependent on the earthquake source distribution because they are derived from ambient noise. The body-wave data are from local earthquakes and explosions, comprising the dataset analyzed by Zhang et al. (2009). The surface-wave data are for Love waves from ambient noise correlations, and are from Roux et al. (2011). The joint inversion code is based on the regional-scale version of the double-difference (DD) tomography algorithm tomoDD. The surface-wave inversion code that is integrated into the joint inversion algorithm is from Maceira and Ammon (2009). The propagator matrix solver in the algorithm DISPER80 (Saito, 1988) is used for the forward calculation of dispersion curves from layered velocity models. We examined how the structural models vary as we vary the relative weighting of the fit to the two data sets and in comparison to the previous separate inversion results. The joint inversion with the 'optimal' weighting shows more clearly the U-shaped local structure from the Buzzard Canyon Fault on the west side of the SAF to the Gold Hill Fault on the east side.

  2. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver functions and surface waves is an effective way to diminish the influence of the strong trade-off among parameters and of the different sensitivities to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversions in model selection and optimization. If several objectives are involved and conflicting, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that yields only a few optimal solutions cannot deal properly with the strong trade-off between parameters, the uncertainties in the observations, the geophysical complexities, and even the incompetence of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one of the competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inversion procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observed inter-station surface-wave dispersion and the receiver functions.

  3. Joint inversion of phase velocity dispersion and H/V ratio curves from seismic noise recordings using a genetic algorithm, considering higher modes

    NASA Astrophysics Data System (ADS)

    Parolai, S.; Picozzi, M.; Richwalski, S. M.; Milkereit, C.

    2005-01-01

    Seismic noise contains information on the local S-wave velocity structure, which can be obtained from the phase-velocity dispersion curve by means of array measurements. The H/V ratio from single stations also contains information on the average S-wave velocity and the total thickness of the sedimentary cover. A joint inversion of the two data sets might therefore allow the final model to be well constrained. We propose a scheme that does not require a starting model because a genetic algorithm is used. Furthermore, we tested two cost functions suitable for our data set, using a priori and data-driven weighting. The latter was more appropriate in our case. In addition, we consider the influence of higher modes on the data sets and use a suitable forward modelling procedure. Using real data, we show that the joint inversion indeed allows a better fit to the observed data than using the dispersion curve alone.
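
    One simple form that data-driven weighting of two misfit terms can take is shown below, with each term normalized by the variance of its observations; this is a generic sketch with placeholder arrays, not the weighting actually used in the paper:

        import numpy as np

        def joint_cost(disp_obs, disp_pred, hv_obs, hv_pred):
            # weight each misfit term by the inverse variance of its own observations
            w_disp = 1.0 / np.var(disp_obs)
            w_hv = 1.0 / np.var(hv_obs)
            return (w_disp * np.mean((disp_obs - disp_pred) ** 2)
                    + w_hv * np.mean((hv_obs - hv_pred) ** 2))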

  4. Efficient realization of 3D joint inversion of seismic and magnetotelluric data with cross gradient structure constraint

    NASA Astrophysics Data System (ADS)

    Luo, H.; Zhang, H.; Gao, J.

    2016-12-01

    Seismic and magnetotelluric (MT) imaging methods are generally used to characterize subsurface structures at various scales. The two methods are complementary to each other, and integrating them helps to determine the resistivity and velocity models of the target region more reliably. Because of the difficulty in finding an empirical relationship between resistivity and velocity parameters, Gallardo and Meju [2003] proposed a joint inversion method that enforces structural consistency between the resistivity and velocity models, realized by minimizing the cross gradients between the two models. However, it is extremely challenging to combine two different inversion systems together along with the cross-gradient constraints. For this reason, Gallardo [2007] proposed a joint inversion scheme that decouples the seismic and MT inversion systems by iteratively performing the seismic and MT inversions as well as the cross-gradient minimization separately. This scheme avoids the complexity of combining two different systems but suffers from the issue of balancing data fitting against the structure constraint. In this study, we have developed a new joint inversion scheme that avoids the problem encountered by the scheme of Gallardo [2007]. In the new scheme, seismic and MT inversions are still performed separately, but the cross-gradient minimization is also constrained by the model perturbations from the separate inversions. In this way, the new scheme still avoids the complexity of combining two different systems and at the same time enforces the balance between data fitting and the structure-consistency constraint. We have tested our joint inversion algorithm for both 2D and 3D cases. Synthetic tests show that the joint inversion reconstructs the velocity and resistivity models better than separate inversions. Compared to separate inversions, joint inversion can remove artifacts in the resistivity model and can improve the resolution of deeper resistivity structures. We will also show results applying the new joint seismic and MT inversion scheme to southwest China, where several MT profiles are available and earthquakes are very active.

  5. Kinematics, controls, and path planning results for a redundant manipulator

    NASA Technical Reports Server (NTRS)

    Gretz, Bruce; Tilley, Scott W.

    1989-01-01

    The inverse kinematics solution, a modal position control algorithm, and path planning results for a 7 degree of freedom manipulator are presented. The redundant arm consists of two links with shoulder and elbow joints and a spherical wrist. The inverse kinematics problem for tip position is solved and the redundant joint is identified. It is also shown that a locus of tip positions exists in which there are kinematic limitations on self-motion. A computationally simple modal position control algorithm has been developed which guarantees a nearly constant closed-loop dynamic response throughout the workspace. If all closed-loop poles are assigned to the same location, the algorithm can be implemented with very little computation. To further reduce the required computation, the modal gains are updated only at discrete time intervals. Criteria are developed for the frequency of these updates. For commanding manipulator movements, a 5th-order spline which minimizes jerk provides a smooth tip-space path. Schemes for deriving a corresponding joint-space trajectory are discussed. Modifying the trajectory to avoid joint torque saturation when a tip payload is added is also considered. Simulation results are presented.

  6. Simultaneous, Joint Inversion of Seismic Body Wave Travel Times and Satellite Gravity Data for Three-Dimensional Tomographic Imaging of Western Colombia

    NASA Astrophysics Data System (ADS)

    Dionicio, V.; Rowe, C. A.; Maceira, M.; Zhang, H.; Londoño, J.

    2009-12-01

    We report on the three-dimensional seismic structure of western Colombia determined through the use of a new simultaneous joint inversion tomography algorithm. Using data recorded by the National Seismological Network of Colombia (RSNC), we selected 3,609 earthquakes recorded at 33 sensors distributed throughout the country, with additional data from stations in neighboring countries. 20,338 P-wave arrivals and 17,041 S-wave arrivals are used to invert for structure within a region extending approximately 72.5 to 77.5 degrees West and 2 to 7.5 degrees North. Our algorithm is a modification of the Maceira and Ammon joint inversion code, in combination with the Zhang and Thurber TomoDD (double-difference tomography) program, with a fast LSQR solver operating jointly on the gridded values. The inversion uses gravity anomalies obtained during the GRACE2 satellite mission, and solves using these values together with the seismic travel times through application of an empirical relationship first proposed by Harkrider, mapping densities to Vp and Vs within earth materials. In previous work, Maceira and Ammon demonstrated that incorporation of gravity data predicts shear-wave velocities more accurately than the inversion of surface waves alone, particularly in regions where the crust exhibits abrupt and significant lateral variations in lithology, such as the Tarim Basin. The significant complexity of crustal structure in Colombia, due to its active tectonic environment, makes it a good candidate for the application with gravity and body waves. We present the results of this joint inversion and compare them to results obtained using travel times alone.

  7. Pareto joint inversion of 2D magnetotelluric and gravity data

    NASA Astrophysics Data System (ADS)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

    In this contribution, the first results of the project "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were used to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description with a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a method of stochastic global optimization, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages of the proposed solution of the joint inversion problem. First of all, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of the work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where the interesting density distributions are relatively shallow and the resistivity changes are related to deeper parts. Such conditions are well suited for joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests with real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13
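
    For reference, a plain single-objective particle swarm optimization kernel is sketched below; the project described above adapts PSO to handle several target functions at once and to build Pareto fronts, which is not shown here, and all parameters are placeholders:

        import numpy as np

        def pso_minimize(f, lower, upper, n_particles=40, n_iter=300, w=0.7, c1=1.5, c2=1.5, rng=0):
            rng = np.random.default_rng(rng)
            ndim = len(lower)
            x = rng.uniform(lower, upper, (n_particles, ndim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, ndim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)    # velocity update
                x = np.clip(x + v, lower, upper)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]   # personal bests
                g = pbest[np.argmin(pbest_f)].copy()                     # global best
            return g, pbest_f.min()

        best, fbest = pso_minimize(lambda m: np.sum((m - 2.0) ** 2),
                                   lower=np.zeros(4), upper=np.full(4, 5.0))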

  8. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ Application Programming Interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  9. Elastic robot control - Nonlinear inversion and linear stabilization

    NASA Technical Reports Server (NTRS)

    Singh, S. N.; Schy, A. A.

    1986-01-01

    An approach to the control of elastic robot systems for space applications using inversion, servocompensation, and feedback stabilization is presented. For simplicity, a robot arm (PUMA type) with three rotational joints is considered. The third link is assumed to be elastic. Using an inversion algorithm, a nonlinear decoupling control law u(d) is derived such that in the closed-loop system independent control of joint angles by the three joint torquers is accomplished. For the stabilization of elastic oscillations, a linear feedback torquer control law u(s) is obtained applying linear quadratic optimization to the linearized arm model augmented with a servocompensator about the terminal state. Simulation results show that in spite of uncertainties in the payload and vehicle angular velocity, good joint angle control and damping of elastic oscillations are obtained with the torquer control law u = u(d) + u(s).

  10. Joint inversion of apparent resistivity and seismic surface and body wave data

    NASA Astrophysics Data System (ADS)

    Garofalo, Flora; Sauvin, Guillaume; Valentina Socco, Laura; Lecomte, Isabelle

    2013-04-01

    A novel inversion algorithm has been implemented to jointly invert apparent resistivity curves from vertical electric soundings, surface-wave dispersion curves, and P-wave travel times. The algorithm works for laterally varying layered sites. Surface-wave dispersion curves and P-wave travel times can be extracted from the same seismic dataset, and apparent resistivity curves can be obtained from continuous vertical electric sounding acquisition. The inversion scheme is based on a series of local 1D layered models whose unknown parameters are the thickness h, S-wave velocity Vs, P-wave velocity Vp, and resistivity R of each layer. The 1D models are linked to the surface-wave dispersion curves and apparent resistivity curves through classical 1D forward modelling, while a 2D model is created by interpolating the 1D models and is linked to the refracted P-wave hodograms. A priori information can be included in the inversion, and a spatial regularization is introduced as a set of constraints between the model parameters of adjacent models and layers. Both the a priori information and the regularization are weighted by covariance matrices. We show the comparison of individual inversions and joint inversion for a synthetic dataset that presents smooth lateral variations. In the individual inversions, the poor sensitivity to some model parameters leads to estimation errors of up to 62.5%, whereas in the joint inversion the cooperation of the different techniques reduces most of the model estimation errors to below 5%, with a few exceptions of up to 39%, an overall improvement. Even though the final model retrieved by joint inversion is internally consistent and more reliable, the analysis of the results reveals unacceptable values of the Vp/Vs ratio for some layers, implying negative Poisson's ratio values. To further improve the inversion performance, an additional constraint is added imposing Poisson's ratio in the range 0-0.5. The final results are globally improved by the introduction of this constraint, which further reduces the maximum error to 30%. The same test was performed on field data acquired in a landslide-prone area close to the town of Hvittingfoss, Norway. Seismic data were recorded on two 160-m-long profiles in roll-along mode using a 5-kg sledgehammer as the source and 24 4.5-Hz vertical geophones with 4-m separation. First-arrival travel times were picked at every shot location, and surface-wave dispersion curves were extracted at 8 locations for each profile. 2D resistivity measurements were carried out on the same profiles using Gradient and Dipole-Dipole arrays with 2-m electrode spacing. The apparent resistivity curves were extracted at the same locations as the dispersion curves. The data were subsequently jointly inverted and the resulting model compared to the individual inversions. Although the models from both individual and joint inversions are consistent, the estimation error is smaller for the joint inversion, especially for the first-arrival travel times. The joint inversion exploits the different sensitivities of the methods to the model parameters and therefore mitigates solution non-uniqueness and the effects of the intrinsic limitations of the different techniques. Moreover, it produces an internally consistent multi-parametric final model that can be profitably interpreted to provide a better understanding of subsurface properties.
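
    The physical-plausibility check mentioned above follows from the standard elasticity relation between Poisson's ratio and the Vp/Vs ratio; a minimal sketch (with arbitrary velocity values) is:

        import numpy as np

        def poissons_ratio(vp, vs):
            # nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2)); negative when Vp/Vs < sqrt(2)
            r2 = (vp / vs) ** 2
            return (r2 - 2.0) / (2.0 * (r2 - 1.0))

        vp, vs = 1800.0, 400.0              # m/s, one layer of a candidate joint-inversion model
        nu = poissons_ratio(vp, vs)
        acceptable = 0.0 <= nu <= 0.5       # otherwise the model is rejected or penalized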

  11. Joint Stochastic Inversion of Pre-Stack 3D Seismic Data and Well Logs for High Resolution Hydrocarbon Reservoir Characterization

    NASA Astrophysics Data System (ADS)

    Torres-Verdin, C.

    2007-05-01

    This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density) and lithotype (sand-shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of the inverted products is higher than that of deterministic inversion methods.

  12. Groundwater contamination in the Roorkee area, India: 2D joint inversion of radiomagnetotelluric and direct current resistivity data

    NASA Astrophysics Data System (ADS)

    Yogeshwar, P.; Tezkan, B.; Israil, M.; Candansayar, M. E.

    2012-01-01

    The impact of sewage irrigation and groundwater contamination was investigated near Roorkee in northern India using the direct current resistivity (DCR) method and the radiomagnetotelluric (RMT) method. Intensive field measurements were carried out in the vicinity of a waste disposal site, which was extensively irrigated with sewage water. For comparison, a profile was investigated at a reference site where no contamination was expected. In addition to conventional 1D and 2D inversion, the measured data sets were interpreted using a 2D joint inversion algorithm. The inversion results from the data obtained at the sewage-irrigated site indicate a decrease of resistivity of up to 75% in comparison with the reference site. The depth range from 5 to 15 m is identified as a shallow unconfined aquifer, and the decreased resistivities are ascribed to the influence of contamination. Furthermore, a systematic increase in the resistivities of the shallow unconfined aquifer is detected as we move away from the waste disposal site. The advantages of both the DCR and RMT methods are quantitatively integrated by the 2D joint inversion of both data sets, leading to a joint model which explains both data sets.

  13. Three-dimensional cross-gradient joint inversion of gravity and normalized magnetic source strength data in the presence of remanent magnetization

    NASA Astrophysics Data System (ADS)

    Zhou, Junjie; Meng, Xiaohong; Guo, Lianghui; Zhang, Sheng

    2015-08-01

    Three-dimensional cross-gradient joint inversion of gravity and magnetic data has the potential to acquire improved density and magnetization distribution information. This method usually adopts the commonly held assumption that remanent magnetization can be ignored and all anomalies present are the result of induced magnetization. Accordingly, this method might fail to produce accurate results where significant remanent magnetization is present. In such a case, the simplification brings about unwanted and unknown deviations in the inverted magnetization model. Furthermore, because of the information transfer mechanism of the joint inversion framework, the inverted density results may also be influenced by the effect of remanent magnetization. The normalized magnetic source strength (NSS) is a transformed quantity that is insensitive to the magnetization direction. Thus, it has been applied in the standard magnetic inversion scheme to mitigate the remanence effects, especially in the case of varying remanence directions. In this paper, NSS data were employed along with gravity data for three-dimensional cross-gradient joint inversion, which can significantly reduce the remanence effects and enhance the reliability of both density and magnetization models. Meanwhile, depth-weightings and bound constraints were also incorporated in this joint algorithm to improve the inversion quality. Synthetic and field examples show that the proposed combination of cross-gradient constraints and the NSS transform produce better results in terms of the data resolution, compatibility, and reliability than that of separate inversions and that of joint inversions with the total magnetization intensity (TMI) data. Thus, this method was found to be very useful and is recommended for applications in the presence of strong remanent magnetization.

  14. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    NASA Astrophysics Data System (ADS)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix, and other methods. To address the problems posed by large data volumes in exploration, we present a parallel algorithm and its performance analysis, combining the Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. Tests on a synthetic model and on real data from the Vinton Dome yield improved results and show that the improved inversion algorithm is effective and feasible. The parallel algorithm we designed performs better than other CUDA-based implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are used to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.
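
    Multi-GPU speedup and multi-GPU efficiency, as used in such scalability analyses, are usually defined as the single-GPU runtime divided by the multi-GPU runtime, and that speedup divided by the number of GPUs; a minimal sketch (the timings below are hypothetical):

        def multi_gpu_speedup(t_single: float, t_multi: float) -> float:
            """Speedup of a run on several GPUs relative to a single GPU."""
            return t_single / t_multi

        def multi_gpu_efficiency(t_single: float, t_multi: float, n_gpus: int) -> float:
            """Fraction of ideal linear scaling achieved on n_gpus devices."""
            return multi_gpu_speedup(t_single, t_multi) / n_gpus

        # Hypothetical timings (seconds) for a forward-modelling kernel.
        print(multi_gpu_speedup(820.0, 230.0))        # ~3.57 on 4 GPUs
        print(multi_gpu_efficiency(820.0, 230.0, 4))  # ~0.89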

  15. Geoelectrical characterization by joint inversion of VES/TEM in Paraná basin, Brazil

    NASA Astrophysics Data System (ADS)

    Bortolozo, C. A.; Couto, M. A.; Almeida, E. R.; Porsani, J. L.; Santos, F. M.

    2012-12-01

    For many years, electrical (DC) and transient electromagnetic (TEM) soundings have been used in a great number of environmental, hydrological and mining exploration studies. The data from both methods are usually interpreted with individual 1D models, which in many cases results in ambiguous models. This can be explained by how the two methodologies sample the subsurface. Vertical electrical sounding (VES) is good at marking very resistive structures, while transient electromagnetic sounding (TEM) is very sensitive to conductive structures. Another characteristic is that VES is more sensitive to shallow structures, while TEM soundings can reach deeper structures. A Matlab program for the joint inversion of VES and TEM soundings using the CRS algorithm was developed, aiming to exploit the best of both methods. Initially, the algorithm was tested with synthetic data, and afterwards it was used to invert experimental data from the Paraná sedimentary basin. We present the results of a re-interpretation of a data set of 46 VES/TEM soundings acquired in the Bebedouro region, São Paulo State, Brazil. The previous interpretation was based on geoelectrical models obtained by single inversion of the VES and TEM soundings. In this work, we present the results of single inversions of the VES and TEM soundings obtained with the Curupira program and a new interpretation based on the joint inversion of both methodologies. The goal is to increase the accuracy in determining the subsurface structures. As a result, a new geoelectrical model of the region is obtained.
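
    A minimal sketch of the kind of combined objective such a joint VES/TEM inversion minimizes, assuming forward solvers `ves_forward` and `tem_forward` for a layered model are available (both names are hypothetical placeholders, not the Curupira code itself):

        import numpy as np

        def joint_misfit(model, d_ves, d_tem, ves_forward, tem_forward, w=0.5):
            """Relative RMS misfit of a layered model against VES and TEM data;
            each term is normalized by its own observations so the two methods
            contribute on a comparable scale."""
            r_ves = (ves_forward(model) - d_ves) / d_ves
            r_tem = (tem_forward(model) - d_tem) / d_tem
            return w * np.sqrt(np.mean(r_ves ** 2)) + (1.0 - w) * np.sqrt(np.mean(r_tem ** 2))

    A controlled random search would then draw trial layered models at random within parameter bounds, evaluate this misfit, and repeatedly replace the worst members of its population with better trial points.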

  16. Computational structures for robotic computations

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chang, P. R.

    1987-01-01

    The computational problem of inverse kinematics and inverse dynamics of robot manipulators by taking advantage of parallelism and pipelining architectures is discussed. For the computation of inverse kinematic position solution, a maximum pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm to overcome the recurrence problem of the Newton-Euler equations of motion to achieve the time lower bound of O(log₂ n) has also been developed.

  17. Joint Inversion of Source Location and Source Mechanism of Induced Microseismics

    NASA Astrophysics Data System (ADS)

    Liang, C.

    2014-12-01

    The seismic source mechanism is a useful property for indicating source physics and the stress and strain distribution at regional, local and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced during fluid fracturing treatment in the oil and gas industry. For events that are large enough to show clear waveforms, quite a few techniques can be applied to invert the source mechanism, including waveform inversion, first-motion polarity inversion, and many variants of these methods. However, for events that are too small to identify in seismic traces, such as the microseismic events induced by fluid fracturing, a source scanning algorithm (SSA) with waveform stacking is usually applied. A joint inversion of location and source mechanism is also possible, but at the cost of a high computational budget. The algorithm is therefore called the Source Location and Mechanism Scanning Algorithm (SLMSA). In this case, for a given velocity structure, all possible combinations of source location (X, Y and Z) and source mechanism (strike, dip and rake) are used to compute travel times and waveform polarities. After correcting normal moveout times and polarities and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a fluid fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only program can hardly locate events because positive and negative polarized traces cancel out, whereas the SLMSA method successfully picks up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures, while statistics of source mechanisms provide more knowledge of fracture orientations; (3) in our practice, the joint inversion method almost always yields more events than the location-only method, and for events that are also picked by the SSA method, the stacking power of SLMSA is always higher than that obtained with SSA.
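
    A schematic of the scanning idea described above, assuming precomputed travel times and radiation-pattern polarities; the helper functions `travel_time` and `polarity` are hypothetical placeholders, and the maximum of the stacked trace is used as a proxy for scanning over origin time:

        import numpy as np
        from itertools import product

        def slmsa_scan(traces, dt, locations, mechanisms, travel_time, polarity):
            """Brute-force scan over (x, y, z, strike, dip, rake): shift each trace
            back by its predicted travel time, flip it by the predicted polarity,
            stack, and keep the combination with the largest stacking power."""
            best, best_power = None, -np.inf
            for loc, mech in product(locations, mechanisms):
                stack = np.zeros_like(traces[0])
                for i, tr in enumerate(traces):
                    shift = int(round(travel_time(loc, i) / dt))
                    stack += polarity(loc, mech, i) * np.roll(tr, -shift)
                power = np.max(np.abs(stack))   # implicit scan over origin time
                if power > best_power:
                    best, best_power = (loc, mech), power
            return best, best_power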

  18. 3D joint inversion modeling of the lithospheric density structure based on gravity, geoid and topography data — Application to the Alborz Mountains (Iran) and South Caspian Basin region

    NASA Astrophysics Data System (ADS)

    Motavalli-Anbaran, Seyed-Hani; Zeyen, Hermann; Ebrahimzadeh Ardestani, Vahid

    2013-02-01

    We present a 3D algorithm to obtain the density structure of the lithosphere from joint inversion of free air gravity, geoid and topography data based on a Bayesian approach with Gaussian probability density functions. The algorithm delivers the crustal and lithospheric thicknesses and the average crustal density. Stabilization of the inversion process may be obtained through parameter damping and smoothing as well as use of a priori information like crustal thicknesses from seismic profiles. The algorithm is applied to synthetic models in order to demonstrate its usefulness. A real data application is presented for the area of northern Iran (with the Alborz Mountains as main target) and the South Caspian Basin. The resulting model shows an important crustal root (up to 55 km) under the Alborz Mountains and a thin crust (ca. 30 km) under the southernmost South Caspian Basin thickening northward to the Apsheron-Balkan Sill to 45 km. Central and NW Iran is underlain by a thin lithosphere (ca. 90-100 km). The lithosphere thickens under the South Caspian Basin until the Apsheron-Balkan Sill where it reaches more than 240 km. Under the stable Turan platform, we find a lithospheric thickness of 160-180 km.

  19. Joint inversion of surface and borehole magnetic data to prospect concealed orebodies: A case study from the Mengku iron deposit, northwestern China

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Hu, Xiangyun; Zhu, Rixiang

    2018-07-01

    The Mengku iron deposit is one of the largest magnetite deposits in Xinjiang Province, northwestern China. It is important to accurately delineate the positions and shapes of concealed orebodies for drillhole layout and resource quantity evaluations. Total-field surface and three-component borehole magnetic measurements were carried out in the deposit. We performed a joint inversion of the surface and borehole magnetic data to investigate the characteristics of the orebodies, recovering the distribution of magnetization intensity using a preconditioned conjugate gradient algorithm. Synthetic examples show that the models reconstructed by the joint inversion are more consistent with the true models than those recovered using independent inversion. By using joint inversion, more accurate information is obtained on the position and shape of the orebodies in the Mengku iron deposit. The magnetization distribution of Line 135 reveals that the major magnetite orebodies occur at 200-400 m depth, with a lenticular cross-section dipping north-east. The orebodies of Line 143 are modified and buried at 100-200 m depth, with an elliptical cross-section caused by fault activity trending north-northeast. This information is verified by well logs. The borehole component anomalies are combined with surface data to reconstruct the physical property model and improve the ability to distinguish structures in the vertical and horizontal directions, which provides an effective approach to prospecting for buried orebodies.
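
    The preconditioned conjugate gradient solver referenced above is a standard Krylov method for the symmetric positive-definite systems that arise from regularized least-squares inversions; a generic dense-matrix sketch (not the authors' implementation):

        import numpy as np

        def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
            """Solve A x = b for symmetric positive-definite A, with M_inv an
            approximation of A^-1 (e.g. the inverse of the diagonal of A)."""
            x = np.zeros_like(b) if x0 is None else x0.copy()
            r = b - A @ x
            z = M_inv @ r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv @ r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x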

  20. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. In contrast to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  1. Integration of Visual and Joint Information to Enable Linear Reaching Motions

    NASA Astrophysics Data System (ADS)

    Eberle, Henry; Nasuto, Slawomir J.; Hayashi, Yoshikatsu

    2017-01-01

    A new dynamics-driven control law was developed for a robot arm, based on the feedback control law which uses the linear transformation directly from work space to joint space. This was validated using a simulation of a two-joint planar robot arm, and an optimisation algorithm was used to find the optimum matrix to generate straight trajectories of the end-effector in the work space. We found that this linear matrix can be decomposed into the rotation matrix representing the orientation of the goal direction and the joint relation matrix (MJRM) representing the joint response to errors in the Cartesian work space. The decomposition of the linear matrix indicates the separation of path planning in terms of the direction of the reaching motion and the synergies of joint coordination. Once the MJRM is numerically obtained, the feedforward planning of reaching direction allows us to provide asymptotically stable, linear trajectories in the entire work space through rotational transformation, completely avoiding the use of inverse kinematics. Our dynamics-driven control law suggests an interesting framework for interpreting human reaching motion control, as an alternative to the dominant inverse-method-based explanations, avoiding the expensive computation of inverse kinematics and point-to-point control along desired trajectories.

  2. An adaptive coupling strategy for joint inversions that use petrophysical information as constraints

    NASA Astrophysics Data System (ADS)

    Heincke, Björn; Jegen, Marion; Moorkamp, Max; Hobbs, Richard W.; Chen, Jin

    2017-01-01

    Joint inversion strategies for geophysical data have become increasingly popular as they allow for the efficient combination of complementary information from different data sets. The algorithm used for the joint inversion needs to be flexible in its description of the subsurface so as to be able to handle the diverse nature of the data. Hence, joint inversion schemes are needed that 1) adequately balance data from the different methods, 2) have stable convergence behavior, 3) consider the different resolving power of the methods used and 4) link the parameter models in a way that is suited to a wide range of applications. Here, we combine active source seismic P-wave tomography, gravity and magnetotelluric (MT) data in a petrophysical joint inversion that accounts for these issues. Data from the different methods are inverted separately but are linked through constraints accounting for parameter relationships. An advantage of performing the inversions separately is that no relative weighting between the data sets is required. To avoid perturbing the convergence behavior of the inversions by the coupling, the strengths of the constraints are readjusted at each iteration. The criterion we use to control the adaptation of the coupling strengths is based on variations in the objective functions of the individual inversions from one iteration to the next. Adaptation of the coupling strengths also makes the joint inversion scheme applicable to subsurface conditions where the assumed relationships are not valid everywhere, because the individual inversions decouple if it is not possible to reach adequately low data misfits for the assumptions made. In addition, the coupling constraints depend on the relative resolutions of the methods, which leads to an improved convergence behavior of the joint inversion. Another benefit of the proposed scheme is that structural information can easily be incorporated in the petrophysical joint inversion (no additional terms are added in the objective functions) by using mutually controlled structural weights for the smoothing constraints. We test our scheme using data generated from a synthetic 2-D sub-basalt model. We observe that the adaptation of the coupling strengths makes the convergence of the inversions very robust (data misfits of all methods are close to the target misfits) and that final results are always close to the true models, independent of the parameter choices. Finally, the scheme is applied to real data sets from the Faroe-Shetland Basin to image a basaltic sequence and underlying structures. The presence of a borehole and a 3-D reflection seismic survey in this region allows direct comparison and, hence, evaluation of the quality of the joint inversion results. The results from joint inversion are more consistent with results from other studies than those from the corresponding individual inversions, and the shape of the basaltic sequence is better resolved. However, due to the limited resolution of the individual methods used, it was not possible to resolve structures underneath the basalt in detail, indicating that additional geophysical information (e.g. CSEM, reflection onsets) needs to be included.

  3. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple-solution problem of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and of the incremental control strategy for the robotic manipulator.
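
    A minimal sketch of an incremental step of this kind, using a generic differential-inverse-kinematics update as a stand-in for the paper's scheme; the `jacobian` function and the variable names are assumptions, not taken from the source:

        import numpy as np

        def incremental_ik_step(q, x_current, x_desired, jacobian, dt, qdot_max):
            """Move the joints a small increment toward the predicted end-effector
            target, clamping each joint rate to its limit instead of solving a
            full closed-form inverse kinematics problem."""
            J = jacobian(q)                      # m x n manipulator Jacobian at q
            dx = x_desired - x_current           # small Cartesian error for this cycle
            dq = np.linalg.pinv(J) @ dx          # least-squares joint increment
            qdot = np.clip(dq / dt, -qdot_max, qdot_max)
            return q + qdot * dt

    Because each cycle starts from the current configuration and takes only a small step, the update naturally stays on one branch of the kinematics rather than jumping between multiple inverse-kinematics solutions.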

  4. Joint design of large-tip-angle parallel RF pulses and blipped gradient trajectories.

    PubMed

    Cao, Zhipeng; Donahue, Manus J; Ma, Jun; Grissom, William A

    2016-03-01

    To design multichannel large-tip-angle kT-points and spokes radiofrequency (RF) pulses and gradient waveforms for transmit field inhomogeneity compensation in high field magnetic resonance imaging. An algorithm to design RF subpulse weights and gradient blip areas is proposed to minimize a magnitude least-squares cost function that measures the difference between realized and desired state parameters in the spin domain, and penalizes integrated RF power. The minimization problem is solved iteratively with interleaved target phase updates, RF subpulse weights updates using the conjugate gradient method with optimal control-based derivatives, and gradient blip area updates using the conjugate gradient method. Two-channel parallel transmit simulations and experiments were conducted in phantoms and human subjects at 7 T to demonstrate the method and compare it to small-tip-angle-designed pulses and circularly polarized excitations. The proposed algorithm designed more homogeneous and accurate 180° inversion and refocusing pulses than other methods. It also designed large-tip-angle pulses on multiple frequency bands with independent and joint phase relaxation. Pulses designed by the method improved specificity and contrast-to-noise ratio in a finger-tapping spin echo blood oxygen level dependent functional magnetic resonance imaging study, compared with circularly polarized mode refocusing. A joint RF and gradient waveform design algorithm was proposed and validated to improve large-tip-angle inversion and refocusing at ultrahigh field. © 2015 Wiley Periodicals, Inc.

  5. Seismicity and structure of Akutan and Makushin Volcanoes, Alaska, using joint body and surface wave tomography

    DOE PAGES

    Syracuse, E. M.; Maceira, M.; Zhang, H.; ...

    2015-02-18

    Joint inversions of seismic data recover models that simultaneously fit multiple constraints while playing upon the strengths of each data type. Here, we jointly invert 14 years of local earthquake body wave arrival times from the Alaska Volcano Observatory catalog and Rayleigh wave dispersion curves based upon ambient noise measurements for local Vp, Vs, and hypocentral locations at Akutan and Makushin Volcanoes using a new joint inversion algorithm. The velocity structure and relocated seismicity of both volcanoes are significantly more complex than many other volcanoes studied using similar techniques. Seismicity is distributed among several areas beneath or beyond the flanks of both volcanoes, illuminating a variety of volcanic and tectonic features. The velocity structures of the two volcanoes are exemplified by the presence of narrow high-Vp features in the near surface, indicating likely current or remnant pathways of magma to the surface. A single broad low-Vp region beneath each volcano is slightly offset from each summit and centered at approximately 7 km depth, indicating a potential magma chamber, where magma is stored over longer time periods. Differing recovery capabilities of the Vp and Vs datasets indicate that the results of these types of joint inversions must be interpreted carefully.

  6. Upper limb joint forces and moments during underwater cyclical movements.

    PubMed

    Lauer, Jessy; Rouard, Annie Hélène; Vilas-Boas, João Paulo

    2016-10-03

    Sound inverse dynamics modeling is lacking in aquatic locomotion research because of the difficulty in measuring hydrodynamic forces in dynamic conditions. Here we report the successful implementation and validation of an innovative methodology combining new computational fluid dynamics and inverse dynamics techniques to quantify upper limb joint forces and moments while moving in water. Upper limb kinematics of seven male swimmers sculling while ballasted with 4 kg were recorded through underwater motion capture. Together with body scans, segment inertial properties, and hydrodynamic resistances computed from a unique dynamic mesh algorithm capable of handling large body deformations, these data were fed into an inverse dynamics model to solve for joint kinetics. Simulation validity was assessed by comparing the impulse produced by the arms, calculated by integrating vertical forces over a stroke period, to the net theoretical impulse of buoyancy and ballast forces. A resulting gap of 1.2±3.5% provided confidence in the results. Upper limb joint load was within 5% of the swimmer's body weight, which tends to support the use of low-load aquatic exercises to reduce joint stress. We expect this significant methodological improvement to pave the way towards deeper insights into the mechanics of aquatic movement and the establishment of practice guidelines in rehabilitation, fitness or swimming performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Measurement-induced nonlocality in arbitrary dimensions in terms of the inverse approximate joint diagonalization

    NASA Astrophysics Data System (ADS)

    Zhang, Li-qiang; Ma, Ting-ting; Yu, Chang-shui

    2018-03-01

    The computability of the quantifier of a given quantum resource is the essential challenge in the resource theory and the inevitable bottleneck for its application. Here we focus on the measurement-induced nonlocality and present a redefinition in terms of the skew information subject to a broken observable. It is shown that the obtained quantity possesses an obvious operational meaning, can tackle the noncontractivity of the measurement-induced nonlocality and has analytic expressions for pure states, (2 ⊗ d)-dimensional quantum states, and some particular high-dimensional quantum states. Most importantly, an inverse approximate joint diagonalization algorithm, due to its simplicity, high efficiency, stability, and state independence, is presented to provide almost-analytic expressions for any quantum state, which can also shed light on other aspects in physics. To illustrate applications as well as demonstrate the validity of the algorithm, we compare the analytic and numerical expressions of various examples and show their perfect consistency.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, E. M.; Maceira, M.; Zhang, H.

    Joint inversions of seismic data recover models that simultaneously fit multiple constraints while playing upon the strengths of each data type. Here, we jointly invert 14 years of local earthquake body wave arrival times from the Alaska Volcano Observatory catalog and Rayleigh wave dispersion curves based upon ambient noise measurements for local Vp, Vs, and hypocentral locations at Akutan and Makushin Volcanoes using a new joint inversion algorithm. The velocity structure and relocated seismicity of both volcanoes are significantly more complex than many other volcanoes studied using similar techniques. Seismicity is distributed among several areas beneath or beyond the flanks of both volcanoes, illuminating a variety of volcanic and tectonic features. The velocity structures of the two volcanoes are exemplified by the presence of narrow high-Vp features in the near surface, indicating likely current or remnant pathways of magma to the surface. A single broad low-Vp region beneath each volcano is slightly offset from each summit and centered at approximately 7 km depth, indicating a potential magma chamber, where magma is stored over longer time periods. Differing recovery capabilities of the Vp and Vs datasets indicate that the results of these types of joint inversions must be interpreted carefully.

  9. Solving Inverse Kinematics of Robot Manipulators by Means of Meta-Heuristic Optimisation

    NASA Astrophysics Data System (ADS)

    Wichapong, Kritsada; Bureerat, Sujin; Pholdee, Nantiwat

    2018-05-01

    This paper presents the use of meta-heuristic algorithms (MHs) for solving the inverse kinematics of robot manipulators based on forward kinematics. The design variables are the joint angular displacements used to move the robot end-effector to a target in Cartesian space, and the design problem is posed so as to minimize the error between the target points and the positions of the robot end-effector. The problem is a dynamic one, as the target points are continually changed by the robot user. Several well-established MHs are used to solve the problem, and the results obtained with the different meta-heuristics are compared in terms of end-effector error and search speed. From the study, the best performer is identified and set as the baseline for future development of MH-based inverse kinematics solvers.
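
    A toy version of this problem formulation for a two-link planar arm, using a very small stochastic search in joint space; the paper benchmarks full meta-heuristics, so this sketch only illustrates how end-effector error over joint angles is minimized (all names and parameter values are illustrative):

        import numpy as np

        def fk(q, lengths=(1.0, 1.0)):
            """Forward kinematics: end-effector position of a 2-link planar arm."""
            x = lengths[0] * np.cos(q[0]) + lengths[1] * np.cos(q[0] + q[1])
            y = lengths[0] * np.sin(q[0]) + lengths[1] * np.sin(q[0] + q[1])
            return np.array([x, y])

        def solve_ik(target, pop=40, gens=200, sigma=0.3, seed=0):
            """Minimize end-effector error over joint angles with a simple
            population-based random search; returns the best configuration."""
            rng = np.random.default_rng(seed)
            best_q = rng.uniform(-np.pi, np.pi, 2)
            best_err = np.linalg.norm(fk(best_q) - target)
            for _ in range(gens):
                cand = best_q + rng.normal(0.0, sigma, (pop, 2))
                errs = np.linalg.norm(np.array([fk(c) for c in cand]) - target, axis=1)
                i = int(np.argmin(errs))
                if errs[i] < best_err:
                    best_q, best_err = cand[i], errs[i]
            return best_q, best_err

        q, err = solve_ik(np.array([1.2, 0.8]))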

  10. Software for Simulating a Complex Robot

    NASA Technical Reports Server (NTRS)

    Goza, S. Michael

    2003-01-01

    RoboSim (Robot Simulation) is a computer program that simulates the poses and motions of the Robonaut, a developmental anthropomorphic robot that has a complex system of joints with 43 degrees of freedom and multiple modes of operation and control. RoboSim performs a full kinematic simulation of all degrees of freedom. It also includes interface components that duplicate the functionality of the real Robonaut interface with control software and human operators. Basically, users see no difference between the real Robonaut and the simulation. Consequently, new control algorithms can be tested by computational simulation, without risk to the Robonaut hardware, and without using excessive Robonaut-hardware experimental time, which is always at a premium. Previously developed software incorporated into RoboSim includes Enigma (for graphical displays), OSCAR (for kinematical computations), and NDDS (for communication between the Robonaut and external software). In addition, RoboSim incorporates unique inverse-kinematical algorithms for chains of joints that have fewer than six degrees of freedom (e.g., finger joints). In comparison with the algorithms of OSCAR, these algorithms are more readily adaptable and provide better results when using equivalent sets of data.

  11. Joint inversion of marine MT and CSEM data over Gemini prospect, Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Constable, S.; Orange, A. S.; Key, K.

    2013-12-01

    In 2003 we tested a prototype marine controlled-source electromagnetic (CSEM) transmitter over the Gemini salt body in the Gulf of Mexico, collecting one line of data over 15 seafloor receiver instruments using the Cox waveform with a 0.25 Hz fundamental, yielding 3 usable frequencies. Transmission current was 95 amps on a 150 m antenna. We had previously collected 16 sites of marine magnetotelluric (MT) data along this line during the development of broadband marine MT as a tool for mapping salt geometry. Recently we commissioned a finite element code capable of joint CSEM and MT 2D inversion incorporating bathymetry and anisotropy, and this heritage data set provided an opportunity to explore such inversions with real data. We reprocessed the CSEM data to obtain objective error estimates and inverted single frequency CSEM, multi-frequency CSEM, MT, and joint MT and CSEM data sets for a variety of target misfits, using the Occam regularized inversion algorithm. As expected, MT-only inversions produce a smoothed image of the salt and a resistive basement at 9 km depth. The CSEM data image a conductive cap over the salt body and have little sensitivity to the salt or structure at depths beyond about 1500 m below seafloor. However, the joint inversion yields more than the sum of the parts - the outline of the salt body is much sharper and there is much more structural detail even at depths beyond the resolution of the CSEM data. As usual, model complexity greatly depends on target misfit, and even with well-estimated errors the choice of misfit becomes a somewhat subjective decision. Our conclusion is a familiar one; more data are always good.

  12. Model based approach to UXO imaging using the time domain electromagnetic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavely, E.M.

    1999-04-01

    Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model-based imaging capability, i.e., the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.

  13. An overview of a highly versatile forward and stable inverse algorithm for airborne, ground-based and borehole electromagnetic and electric data

    NASA Astrophysics Data System (ADS)

    Auken, Esben; Christiansen, Anders Vest; Kirkegaard, Casper; Fiandaca, Gianluca; Schamper, Cyril; Behroozmand, Ahmad Ali; Binley, Andrew; Nielsen, Emil; Effersø, Flemming; Christensen, Niels Bøie; Sørensen, Kurt; Foged, Nikolaj; Vignoli, Giulio

    2015-07-01

    We present an overview of a mature, robust and general algorithm providing a single framework for the inversion of most electromagnetic and electrical data types and instrument geometries. The implementation mainly uses a 1D earth formulation for electromagnetics and magnetic resonance sounding (MRS) responses, while the geoelectric responses are both 1D and 2D and the sheet's response models a 3D conductive sheet in a conductive host with an overburden of varying thickness and resistivity. In all cases, the focus is placed on delivering full system forward modelling across all supported types of data. Our implementation is modular, meaning that the bulk of the algorithm is independent of data type, making it easy to add support for new types. Having implemented forward response routines and file I/O for a given data type provides access to a robust and general inversion engine. This engine includes support for mixed data types, arbitrary model parameter constraints, integration of prior information and calculation of both model parameter sensitivity analysis and depth of investigation. We present a review of our implementation and methodology and show four different examples illustrating the versatility of the algorithm. The first example is a laterally constrained joint inversion (LCI) of surface time domain induced polarisation (TDIP) data and borehole TDIP data. The second example shows a spatially constrained inversion (SCI) of airborne transient electromagnetic (AEM) data. The third example is an inversion and sensitivity analysis of MRS data, where the electrical structure is constrained with AEM data. The fourth example is an inversion of AEM data, where the model is described by a 3D sheet in a layered conductive host.

  14. A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.

    PubMed

    Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P

    2010-11-01

    The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in modeling shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, six scapulo-humeral muscles and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping around the humeral head, assumed spherical, was taken into account. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
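
    The two-step redundancy resolution described above can be sketched as a minimum-norm pseudo-inverse solution followed by null-space corrections that push the muscle forces toward physiological bounds without changing the joint torque. This is a generic sketch under those assumptions, not the authors' shoulder model, and full feasibility is not guaranteed by the simple iteration shown:

        import numpy as np

        def muscle_forces(A, tau, f_min, f_max, iters=50):
            """Distribute a joint torque tau over redundant muscles with A f = tau.
            Step 1: minimum-norm solution via the pseudo-inverse.
            Step 2: add null-space components (A @ df = 0) that pull force
            violations back toward the bounds while preserving the torque."""
            A_pinv = np.linalg.pinv(A)
            f = A_pinv @ tau                      # pseudo-inverse (minimum-norm) step
            N = np.eye(A.shape[1]) - A_pinv @ A   # projector onto the null space of A
            for _ in range(iters):
                correction = np.clip(f, f_min, f_max) - f  # desired bound correction
                f = f + N @ correction                     # torque left unchanged
            return f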

  15. The Natural-CCD Algorithm, a Novel Method to Solve the Inverse Kinematics of Hyper-redundant and Soft Robots.

    PubMed

    Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime

    2018-03-22

    This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, this kind of robot is an underdetermined system. Such robots therefore exhibit an infinite number of solutions to the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD), named natural-CCD, is proposed to address this issue. It takes its name from the fact that it generates very harmonious robot movements and trajectories that also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
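
    Classic CCD, on which natural-CCD builds, sweeps through the joints one at a time and rotates each so that the end effector moves toward the target; a planar sketch for rotational joints (the natural-CCD refinements are not reproduced here):

        import numpy as np

        def fk_positions(q, lengths):
            """Positions of every joint (and the tip) of a planar serial chain."""
            pts, ang, p = [np.zeros(2)], 0.0, np.zeros(2)
            for qi, li in zip(q, lengths):
                ang += qi
                p = p + li * np.array([np.cos(ang), np.sin(ang)])
                pts.append(p)
            return pts

        def ccd(q, lengths, target, iters=100, tol=1e-4):
            """Cyclic coordinate descent: from the last joint to the first, rotate
            each joint so the vector joint->tip aligns with the vector joint->target."""
            q = np.array(q, dtype=float)
            for _ in range(iters):
                if np.linalg.norm(fk_positions(q, lengths)[-1] - target) < tol:
                    break
                for i in reversed(range(len(q))):
                    pts = fk_positions(q, lengths)
                    to_tip = pts[-1] - pts[i]
                    to_tgt = target - pts[i]
                    q[i] += np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_tip[1], to_tip[0])
            return q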

  16. A Framework for a Supervisory Expert System for Robotic Manipulators with Joint-Position Limits and Joint-Rate Limits

    NASA Technical Reports Server (NTRS)

    Mutambara, Arthur G. O.; Litt, Jonathan

    1998-01-01

    This report addresses the problem of path planning and control of robotic manipulators which have joint-position limits and joint-rate limits. The manipulators move autonomously and carry out variable tasks in a dynamic, unstructured and cluttered environment. The issue considered is whether the robotic manipulator can achieve all its tasks, and if it cannot, the objective is to identify the closest achievable goal. This problem is formalized and systematically solved for generic manipulators by using inverse kinematics and forward kinematics. Inverse kinematics are employed to define the subspace, workspace and constrained workspace, which are then used to identify when a task is not achievable. The closest achievable goal is obtained by determining weights for an optimal control redistribution scheme. These weights are quantified by using forward kinematics. Conditions leading to joint-rate limits are identified; in particular, it is established that all generic manipulators have singularities at the boundary of their workspace, while some have loci of singularities inside their workspace. Once the manipulator singularity is identified, the command redistribution scheme is used to compute the closest achievable Cartesian velocities. Two examples are used to illustrate the use of the algorithm: a three-link planar manipulator and the Unimation Puma 560. Implementation of the derived algorithm is effected by using a supervisory expert system to check whether the desired goal lies in the constrained workspace and, if not, to invoke the redistribution scheme, which determines the constraint relaxation between end-effector position and orientation and then computes optimal gains.

  17. Human arm joints reconstruction algorithm in rehabilitation therapies assisted by end-effector robotic devices.

    PubMed

    Bertomeu-Motos, Arturo; Blanco, Andrea; Badesa, Francisco J; Barios, Juan A; Zollo, Loredana; Garcia-Aracil, Nicolas

    2018-02-20

    End-effector robots are commonly used in robot-assisted neuro-rehabilitation therapies for upper limbs, where the patient's hand can be easily attached to a splint. Nevertheless, they are not able to estimate and control the kinematic configuration of the upper limb during the therapy. However, the Range of Motion (ROM) together with the clinical assessment scales offers a comprehensive assessment to the therapist. Our aim is to present a robust and stable kinematic reconstruction algorithm to accurately measure the upper limb joints using only an accelerometer placed on the upper arm. The proposed algorithm is based on the inverse of the augmented Jacobian, as in the algorithm of Papaleo et al. (Med Biol Eng Comput 53(9):815-28, 2015). However, the estimation of the elbow joint location is performed through the computation of the rotation measured by the accelerometer during the arm movement, making the algorithm more robust against shoulder movements. Furthermore, we present a method to compute the initial configuration of the upper limb necessary to start the integration method, a protocol to manually measure the upper arm and forearm lengths, and a shoulder position estimation. An optoelectronic system was used to test the accuracy of the proposed algorithm whilst healthy subjects were performing upper limb movements holding the end effector of the seven Degrees of Freedom (DoF) robot. In addition, the previous and the proposed algorithms were studied during a neuro-rehabilitation therapy assisted by the 'PUPArm' planar robot with three post-stroke patients. The proposed algorithm reports a Root Mean Square Error (RMSE) of 2.13 cm in the elbow joint location and 1.89 cm in the wrist joint location, with high correlation. These errors lead to an RMSE of about 3.5 degrees (mean of the seven joints) with high correlation in all the joints with respect to the real upper limb acquired through the optoelectronic system. The estimation of the upper limb joints with both algorithms reveals an instability in the previous one when shoulder movements appear, due to the inevitable trunk compensation in post-stroke patients. The proposed algorithm is able to accurately estimate the human upper limb joints during a neuro-rehabilitation therapy assisted by end-effector robots. In addition, the implemented protocol can be followed in a clinical environment without optoelectronic systems, using only one accelerometer attached to the upper arm. Thus, the ROM can be accurately determined and could become an objective parameter for a comprehensive assessment.

  18. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.

  19. Research on the Optimization Method of Arm Movement in the Assembly Workshop Based on Ergonomics

    NASA Astrophysics Data System (ADS)

    Hu, X. M.; Qu, H. W.; Xu, H. J.; Yang, L.; Yu, C. C.

    2017-12-01

    In order to improve work efficiency and comfort, ergonomics is used to study the work of operators in the assembly workshop. An optimization algorithm for arm movement in the assembly workshop is proposed. In the algorithm, a mathematical model of arm movement is established based on a multi-rigid-body movement model and the D-H method. The inverse kinematics equation of the arm movement is solved using kinematics theory. Evaluation functions for each joint movement and for the whole arm movement are given based on the comfort of human body joints. The method for finding the optimal arm movement posture based on these evaluation functions is described. The software CATIA is used to verify the optimal arm movement posture in an example, and the experimental results show the effectiveness of the algorithm.

  20. Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications

    NASA Astrophysics Data System (ADS)

    He, K.; Zhu, W. D.

    2011-07-01

    A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistic function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
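
    A logistic transformation of this kind maps a bounded damage parameter onto an unconstrained variable so that a standard trust-region or Levenberg-Marquardt iteration can operate without explicit bound handling; a minimal sketch of one common form (the exact transform used by the authors may differ):

        import numpy as np

        def to_unconstrained(x, lo=0.0, hi=1.0):
            """Map a bounded damage parameter x in (lo, hi) to an unbounded y."""
            t = (x - lo) / (hi - lo)
            return np.log(t / (1.0 - t))

        def to_constrained(y, lo=0.0, hi=1.0):
            """Inverse logistic map: any real y returns to the physical interval."""
            return lo + (hi - lo) / (1.0 + np.exp(-y))

    The optimizer iterates on y, and every trial point maps back into the physically admissible damage range through the inverse transform.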

  1. Three dimensional inversion of magnetic survey data collected over kimberlite pipes in presence of remanent magnetization

    NASA Astrophysics Data System (ADS)

    Zhao, Pengzhi

    The magnetic method is a common geophysical technique used to explore for kimberlites. The analysis and interpretation of measured magnetic data provide information on the magnetic and geometric properties of potential kimberlite pipes. A crucial parameter in kimberlite magnetic interpretation is the remanent magnetization, which dominates the classification of kimberlites. However, the measured magnetic data are the total field, affected by both the remanent magnetization and the susceptibility. The presence of remanent magnetization can pose severe challenges to the quantitative interpretation of magnetic data by skewing or laterally shifting magnetic anomalies relative to the subsurface source (Haney and Li, 2002). Therefore, identification of remanence effects and determination of remanent magnetization are important in magnetic data interpretation. This project presents a new method to determine the magnetic and geometric properties of kimberlite pipes in the presence of strong remanent magnetization. The method consists of two steps. The first step is to estimate the total magnetization and the geometric properties of the magnetic anomaly. The second step is to separate the remanent magnetization from the total magnetization. In the first step, a joint parametric inversion of total-field magnetic data and its analytic signal (derived from the survey data by a Fourier transform method) is used. The joint inversion algorithm is based on the Gauss-Newton method and is more stable and more accurate than the separate inversion method. It has been tested with synthetic data and applied to interpret field data from Lac de Gras, Northwest Territories, Canada. The results of the synthetic examples and the field data applications show that the joint inversion can recover the total magnetization and geometric properties of the magnetic anomaly with a good data fit and stable convergence. In the second step, the remanent magnetization is separated from the total magnetization using a determined susceptibility. The susceptibility value is estimated from frequency-domain electromagnetic data. The inversion is performed with the code “EM1DFM”, developed by the University of British Columbia, which was designed to construct one of four types of 1D model from any type of geophysical frequency-domain loop-loop data, using one of four variations of the inversion algorithm. The results show that the susceptibility of the magnetic body is recovered, even if the depth and thickness are not well estimated. This two-step process provides a new way to determine the magnetic and geometric properties of kimberlite pipes in the presence of strong remanent magnetization. The joint inversion of the total-field magnetic data and its analytic signal yields the total magnetization and geometric properties, the frequency-domain EM method provides the susceptibility, and as a result the remanent magnetization can be separated from the total magnetization accurately.

  2. Mapping From an Instrumented Glove to a Robot Hand

    NASA Technical Reports Server (NTRS)

    Goza, Michael

    2005-01-01

    An algorithm has been developed to solve the problem of mapping from (1) a glove instrumented with joint-angle sensors to (2) an anthropomorphic robot hand. Such a mapping is needed to generate control signals to make the robot hand mimic the configuration of the hand of a human attempting to control the robot. The mapping problem is complicated by uncertainties in sensor locations caused by variations in sizes and shapes of hands and variations in the fit of the glove. The present mapping algorithm is robust in the face of these uncertainties, largely because it includes a calibration sub-algorithm that inherently adapts the mapping to the specific hand and glove, without need for measuring the hand and without regard for goodness of fit. The algorithm utilizes a forward-kinematics model of the glove derived from documentation provided by the manufacturer of the glove. In this case, forward-kinematics model signifies a mathematical model of the glove fingertip positions as functions of the sensor readings. More specifically, given the sensor readings, the forward-kinematics model calculates the glove fingertip positions in a Cartesian reference frame nominally attached to the palm. The algorithm also utilizes an inverse-kinematics model of the robot hand. In this case, inverse-kinematics model signifies a mathematical model of the robot finger-joint angles as functions of the robot fingertip positions. Again, more specifically, the inverse-kinematics model calculates the finger-joint commands needed to place the fingertips at specified positions in a Cartesian reference frame that is attached to the palm of the robot hand and that nominally corresponds to the Cartesian reference frame attached to the palm of the glove. Initially, because of the aforementioned uncertainties, the glove fingertip positions calculated by the forward-kinematics model in the glove Cartesian reference frame cannot be expected to match the robot fingertip positions in the robot-hand Cartesian reference frame. A calibration must be performed to make the glove and robot-hand fingertip positions correspond more precisely. The calibration procedure involves a few simple hand poses designed to provide well-defined fingertip positions. One of the poses is a fist. In each of the other poses, a finger touches the thumb. The calibration sub-algorithm uses the sensor readings from these poses to modify the kinematical models to make the two sets of fingertip positions agree more closely.

  3. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    NASA Astrophysics Data System (ADS)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.

  4. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
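
    A compact illustration of the simulated-annealing idea described above, optimizing a sequence of joint-space via-points against a user-supplied cost; the `cost` function here is a stand-in for the collision and configuration measures the software actually uses, and all parameter values are illustrative:

        import numpy as np

        def anneal_path(cost, q_start, q_goal, n_via=8, n_joints=6,
                        iters=5000, t0=1.0, t_end=1e-3, seed=0):
            """Simulated annealing over the interior via-points of a joint-space path."""
            rng = np.random.default_rng(seed)
            path = np.linspace(q_start, q_goal, n_via + 2)     # straight-line seed
            best, e = path.copy(), cost(path)
            e_best = e
            for k in range(iters):
                t = t0 * (t_end / t0) ** (k / iters)           # geometric cooling
                cand = path.copy()
                i = rng.integers(1, n_via + 1)                 # never move the endpoints
                cand[i] += rng.normal(0.0, 0.1, n_joints)
                e_new = cost(cand)
                # accept improvements always, and worse moves with Boltzmann probability
                if e_new < e or rng.random() < np.exp((e - e_new) / t):
                    path, e = cand, e_new
                    if e < e_best:
                        best, e_best = path.copy(), e
            return best

    Accepting occasional uphill moves at high temperature is what lets the search escape local minima of the error measure before the cooling schedule freezes the path.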

  5. The JPL Serpentine Robot: A 12 DOF System for Inspection

    NASA Technical Reports Server (NTRS)

    Paljug, E.; Ohm, T.; Hayati, S.

    1995-01-01

    The Serpentine Robot is a prototype hyper-redundant (snake-like) manipulator system developed at the Jet Propulsion Laboratory. It is designed to navigate and perform tasks in obstructed and constrained environments in which conventional 6 DOF manipulators cannot function. Described are the robot's mechanical design, joint assembly, low-level inverse kinematic algorithm, control development, and applications.

  6. An order (n) algorithm for the dynamics simulation of robotic systems

    NASA Technical Reports Server (NTRS)

    Chun, H. M.; Turner, J. D.; Frisch, Harold P.

    1989-01-01

    The formulation of an Order (n) algorithm for DISCOS (Dynamics Interaction Simulation of Controls and Structures), an industry-standard software package for the simulation and analysis of flexible multibody systems, is presented. For systems involving many bodies, the new Order (n) version of DISCOS is much faster than the current version. Results of the experimental validation of the dynamics software are also presented. The experiment is carried out on a seven-joint robot arm at NASA's Goddard Space Flight Center. The algorithm used in the current version of DISCOS requires the inverse of a matrix whose dimension is equal to the number of constraints in the system. Generally, the number of constraints in a system is roughly proportional to the number of bodies in the system, and matrix inversion requires O(p^3) operations, where p is the dimension of the matrix. The current version of DISCOS is therefore considered an Order (n^3) algorithm. In contrast, the Order (n) algorithm requires inversion of matrices which are small, and the number of matrices to be inverted increases only linearly with the number of bodies. The newly developed Order (n) DISCOS is currently capable of handling chain and tree topologies as well as multiple closed loops. Continuing development will extend the capability of the software to deal with typical robotics applications such as pick-and-place, multi-arm hand-off and surface sliding.

  7. On the adequacy of identified Cole Cole models

    NASA Astrophysics Data System (ADS)

    Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.

    2003-06-01

    The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency domain complex impedance data, and a simple error estimate is obtained from the squared difference of the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the 'optimal' estimation of the Cole-Cole parameters, which differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations without the need for an initial guess for initialisation. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests, and we give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ² technique. The second is a parameter-accuracy based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to consider the adequacy of the resulting Cole-Cole model.
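    For readers unfamiliar with the model being fitted, the following short Python sketch evaluates the standard Pelton-style Cole-Cole complex resistivity response; the parameter values and frequency range are hypothetical, and the paper's direct inversion procedure is not reproduced here.

```python
import numpy as np

def cole_cole(freq_hz, rho0, m, tau, c):
    """Pelton-style Cole-Cole complex resistivity model used in spectral induced
    polarization: rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))."""
    w = 2.0 * np.pi * np.asarray(freq_hz)
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

# Hypothetical parameters and frequency range, just to show the shape of the response.
freqs = np.logspace(-2, 4, 50)           # 0.01 Hz .. 10 kHz
z = cole_cole(freqs, rho0=100.0, m=0.3, tau=0.01, c=0.5)
print(z.real[:3], z.imag[:3])
```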

  8. Advanced Multivariate Inversion Techniques for High Resolution 3D Geophysical Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Maceira, M.; Zhang, H.; Rowe, C. A.

    2009-12-01

    We focus on the development and application of advanced multivariate inversion techniques to generate a realistic, comprehensive, and high-resolution 3D model of the seismic structure of the crust and upper mantle that satisfies several independent geophysical datasets. Building on previous efforts of joint inversion using surface wave dispersion measurements, gravity data, and receiver functions, we have added a fourth dataset, seismic body wave P and S travel times, to the simultaneous joint inversion method. We present a 3D seismic velocity model of the crust and upper mantle of northwest China resulting from the simultaneous, joint inversion of these four data types. Surface wave dispersion measurements are primarily sensitive to seismic shear-wave velocities, but at shallow depths it is difficult to obtain high-resolution velocities and to constrain the structure due to the depth-averaging of the more easily-modeled, longer-period surface waves. Gravity inversions have the greatest resolving power at shallow depths, and they provide constraints on rock density variations. Moreover, while surface wave dispersion measurements are primarily sensitive to vertical shear-wave velocity averages, body wave receiver functions are sensitive to shear-wave velocity contrasts and vertical travel-times. Addition of the fourth dataset, consisting of seismic travel-time data, helps to constrain the shear wave velocities both vertically and horizontally in the model cells crossed by the ray paths. Incorporation of both P and S body wave travel times allows us to invert for both P and S velocity structure, capitalizing on empirical relationships between both wave types' seismic velocities and rock densities, thus eliminating the need for ad hoc assumptions regarding the Poisson ratios. Our new tomography algorithm is a modification of the Maceira and Ammon joint inversion code, in combination with the Zhang and Thurber TomoDD (double-difference tomography) program.

  9. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaption Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
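    The following Python sketch illustrates the two-stage idea described above, with a heavily simplified elite-reweighting evolution strategy standing in for CMAES, followed by random sampling of the low-misfit region; the toy forward problem, tolerances, and population sizes are all illustrative assumptions.

```python
import numpy as np

def misfit(m, d_obs, forward):
    return np.sum((forward(m) - d_obs) ** 2)

def explore_equivalence_domain(forward, d_obs, x0, sigma0=1.0,
                               n_gen=60, pop=32, elite_frac=0.25,
                               misfit_tol=1e-2, n_samples=2000, seed=0):
    """Two-stage sketch: (1) an elite-reweighting evolution strategy (a simplified
    stand-in for CMAES) locates a low-misfit region; (2) that region is sampled
    randomly and models below a misfit tolerance are kept as 'equivalent' models."""
    rng = np.random.default_rng(seed)
    mean, cov = np.array(x0, float), sigma0 ** 2 * np.eye(len(x0))
    n_elite = max(2, int(elite_frac * pop))
    for _ in range(n_gen):
        candidates = rng.multivariate_normal(mean, cov, size=pop)
        fitness = np.array([misfit(c, d_obs, forward) for c in candidates])
        elite = candidates[np.argsort(fitness)[:n_elite]]
        mean = elite.mean(axis=0)                        # recentre on the elite set
        cov = np.cov(elite.T) + 1e-9 * np.eye(len(x0))   # adapt the search covariance
    # Stage 2: random sampling around the optimum to map the equivalence domain.
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    keep = np.array([misfit(s, d_obs, forward) for s in samples]) < misfit_tol
    return mean, samples[keep]

# Toy "forward" problem with two unknowns and three data values.
truth = np.array([1.0, -2.0])
forward = lambda m: np.array([m[0] + m[1], m[0] - m[1], 2 * m[0]])
best, ensemble = explore_equivalence_domain(forward, forward(truth), x0=[0.0, 0.0])
print(best, len(ensemble))
```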

  10. Time-lapse Joint Inversion of Geophysical Data and its Applications to Geothermal Prospecting - GEODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Revil, Andre

    2015-12-31

    The objectives of this project were to develop new algorithms to decrease the cost of drilling for geothermal targets during the exploration phase of a hydrothermal field and to improve the monitoring of a geothermal field to better understand its plumbing system and keep the resource renewable. We developed both new software and algorithms for geothermal exploration (that can also be used in other areas of interest to the DOE) and we applied the methods to a geothermal field of interest to ORMAT in Nevada.

  11. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  12. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  13. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

    Solution of inverse problems is an important task in geophysics. The amount of data is continuously increasing, modeling methods are being improved, and computing facilities are making great technical progress. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains relevant. ANNIT contributes to this stream, since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, and the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p) = d. The inverse problem is much more complex: the inverse mapping p = G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively, in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archived models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is now implemented in both MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and the documentation are available on the Internet and anybody can download them. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.
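    A minimal Python sketch of the prediction principle outlined above (not the ANNIT code): forward-evaluate a population of models, archive the responses, approximate the inverse mapping G by linear regression, and predict a candidate solution. The toy forward function and parameter bounds are hypothetical.

```python
import numpy as np

def predict_candidate(forward, d_target, p_lo, p_hi, n_pop=200, seed=0):
    """One cycle of a prediction-based inversion: sample a population of models,
    archive their forward responses, approximate the inverse mapping p = G(d)
    by linear regression, and predict a candidate model for the target data."""
    rng = np.random.default_rng(seed)
    p_lo, p_hi = np.asarray(p_lo, float), np.asarray(p_hi, float)
    population = rng.uniform(p_lo, p_hi, size=(n_pop, len(p_lo)))     # model-space sample
    responses = np.array([forward(p) for p in population])            # archive of F(p)
    # Linear least-squares fit of G: p ~ A d + b (a simple stand-in for ANNIT's
    # regression / RBF-network / kriging approximations).
    D = np.hstack([responses, np.ones((n_pop, 1))])
    coeff, *_ = np.linalg.lstsq(D, population, rcond=None)
    candidate = np.append(d_target, 1.0) @ coeff
    return np.clip(candidate, p_lo, p_hi)

# Toy nonlinear forward problem: two parameters, three data values.
forward = lambda p: np.array([p[0] ** 2 + p[1], p[0] - p[1], np.sin(p[1])])
p_true = np.array([1.2, 0.4])
candidate = predict_candidate(forward, forward(p_true), p_lo=[-2, -2], p_hi=[2, 2])
print(candidate)
```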

  14. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement in sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed ISAR imaging method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters and avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  15. Computational neural learning formalisms for manipulator inverse kinematics

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Barhen, Jacob; Iyengar, S. Sitharama

    1989-01-01

    An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors - a new class of mathematical constructs which provide unique information processing capabilities to artificial neural systems. For robotic applications, synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, joint-space configurations, required to follow arbitrary end-effector trajectories, can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematics and environmental constraints.

  16. Joint groupwise registration and ADC estimation in the liver using a B-value weighted metric.

    PubMed

    Sanz-Estébanez, Santiago; Rabanillo-Viloria, Iñaki; Royuela-Del-Val, Javier; Aja-Fernández, Santiago; Alberola-López, Carlos

    2018-02-01

    The purpose of this work is to develop a groupwise elastic multimodal registration algorithm for robust ADC estimation in the liver on multiple breath hold diffusion weighted images. We introduce a joint formulation to simultaneously solve both the registration and the estimation problems. In order to avoid non-reliable transformations and undesirable noise amplification, we have included appropriate smoothness constraints for both problems. Our metric incorporates the ADC estimation residuals, which are inversely weighted according to the signal content in each diffusion weighted image. Results show that the joint formulation provides a statistically significant improvement in the accuracy of the ADC estimates. Reproducibility has also been measured on real data in terms of the distribution of ADC differences obtained from different b-values subsets. The proposed algorithm is able to effectively deal with both the presence of motion and the geometric distortions, increasing accuracy and reproducibility in diffusion parameters estimation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
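    As a rough sketch of the estimation component (not the authors' registration metric), the following Python snippet fits the mono-exponential diffusion model S(b) = S0 * exp(-b * ADC) by weighted least squares, with illustrative signal-content weights.

```python
import numpy as np

def fit_adc_weighted(b_values, signals):
    """Weighted log-linear fit of the mono-exponential diffusion model
    S(b) = S0 * exp(-b * ADC). Residuals are weighted by signal amplitude
    (an illustrative stand-in for the signal-content weighting described in
    the record), which de-emphasises noisy low-signal images."""
    b = np.asarray(b_values, float)
    s = np.asarray(signals, float)
    w = s / s.max()                              # illustrative weights from signal content
    # Linearize: log S = log S0 - b * ADC, then solve the weighted least-squares system.
    A = np.vstack([np.ones_like(b), -b]).T
    (log_s0, adc), *_ = np.linalg.lstsq(A * w[:, None], np.log(s) * w, rcond=None)
    return np.exp(log_s0), adc

# Synthetic liver-like example: ADC ~ 1.2e-3 mm^2/s, b-values in s/mm^2.
b_vals = np.array([0.0, 50.0, 200.0, 400.0, 800.0])
sig = 1000.0 * np.exp(-b_vals * 1.2e-3)
print(fit_adc_weighted(b_vals, sig))   # ~ (1000.0, 0.0012)
```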

  17. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
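    A minimal Python sketch of the alternating scheme described above, using a consensus-style augmented Lagrangian on a toy two-dataset problem; the component objectives, step sizes, and penalty parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def consensus_joint_inversion(component_objectives, n_params, rho=1.0,
                              n_outer=100, lr=0.1, n_inner=50):
    """Augmented-Lagrangian (consensus) sketch of decomposing a joint inverse
    problem: each data subset k keeps its own model m_k, the subproblems are
    solved separately, and Lagrange multipliers steer all m_k toward a common
    model z that solves the full problem."""
    K = len(component_objectives)
    m = np.zeros((K, n_params))          # per-dataset models
    u = np.zeros((K, n_params))          # scaled Lagrange multipliers
    z = np.zeros(n_params)               # consensus (full-problem) model
    eps = 1e-5
    for _ in range(n_outer):
        for k, phi in enumerate(component_objectives):
            # Solve component problem: min phi_k(m) + (rho/2)||m - z + u_k||^2,
            # here by a few steps of finite-difference gradient descent for brevity.
            for _ in range(n_inner):
                grad = np.array([(phi(m[k] + eps * e) - phi(m[k] - eps * e)) / (2 * eps)
                                 for e in np.eye(n_params)])
                grad += rho * (m[k] - z + u[k])
                m[k] -= lr * grad
        z = (m + u).mean(axis=0)          # consensus update
        u += m - z                        # multiplier update
    return z

# Toy example: two "datasets" preferring different models; the consensus balances them.
phi1 = lambda x: np.sum((x - np.array([1.0, 0.0])) ** 2)
phi2 = lambda x: np.sum((x - np.array([0.0, 1.0])) ** 2)
print(consensus_joint_inversion([phi1, phi2], n_params=2))   # ~ [0.5, 0.5]
```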

  18. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
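    For the single-Gaussian case, the analytic posterior reduces to the standard Gaussian-linear update sketched below in Python; the forward matrix, prior, and noise covariances are hypothetical, and the rock-physics prior itself is not reproduced.

```python
import numpy as np

def gaussian_linear_posterior(G, d_obs, mu_prior, C_prior, C_noise):
    """Analytic posterior for a linear forward model d = G m + e with Gaussian
    prior N(mu_prior, C_prior) and Gaussian noise N(0, C_noise). This is the
    textbook update behind the single-Gaussian case described above."""
    K = C_prior @ G.T @ np.linalg.inv(G @ C_prior @ G.T + C_noise)   # Kalman-type gain
    mu_post = mu_prior + K @ (d_obs - G @ mu_prior)
    C_post = C_prior - K @ G @ C_prior
    return mu_post, C_post

# Toy 2-parameter example (e.g., log-impedance and porosity at one location).
G = np.array([[1.0, 0.5],
              [0.2, 1.0]])
mu_post, C_post = gaussian_linear_posterior(
    G, d_obs=np.array([1.1, 0.4]),
    mu_prior=np.zeros(2), C_prior=np.eye(2), C_noise=0.01 * np.eye(2))
print(mu_post)
```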

  19. Joint Inversion of Gravity and Gravity Tensor Data Using the Structural Index as Weighting Function Rate Decay

    NASA Astrophysics Data System (ADS)

    Ialongo, S.; Cella, F.; Fedi, M.; Florio, G.

    2011-12-01

    Most geophysical inversion problems are characterized by a number of data considerably higher than the number of unknown parameters. This corresponds to solving highly underdetermined systems. To get a unique solution, a priori information must therefore be introduced. We here analyze the inversion of the gravity gradient tensor (GGT). Previous approaches to inverting several gradient components, jointly or independently, include Li (2001), who proposed an algorithm using a depth weighting function, and Zhdanov et al. (2004), who provided a well-focused inversion of gradient data. Both methods give a much-improved solution compared with the minimum length solution, which is invariably shallow and not representative of the true source distribution. For very underdetermined problems, this feature is due to the role of the depth weighting matrices used by both methods. Recently, Cella and Fedi (2011) showed however that for magnetic and gravity data the depth weighting function has to be defined carefully, under a preliminary application of the Euler Deconvolution or Depth from Extreme Points methods, yielding the appropriate structural index and then using it as the rate decay of the weighting function. We therefore propose to extend this last approach to invert the GGT, jointly or independently, using the structural index as the weighting function rate decay. In the case of a joint inversion, gravity data can be added as well. This multicomponent case is also relevant because the simultaneous use of several components and gravity increases the number of data and reduces the algebraic ambiguity compared to the inversion of a single component. The reduction of such ambiguity was shown in Fedi et al. (2005) to be decisive in obtaining an improved depth resolution in inverse problems, independently of any form of depth weighting function. The method is demonstrated on synthetic cases and applied to real cases, such as the Vredefort impact area (South Africa), characterized by a complex density distribution, well defining a central uplift area, ring structures and low density sediments. REFERENCES Cella F. and Fedi M., 2011, Inversion of potential field data using the structural index as weighting function rate decay, Geophysical Prospecting, doi: 10.1111/j.1365-2478.2011.00974.x. Fedi M., Hansen P. C. and Paoletti V., 2005, Analysis of depth resolution in potential-field inversion, Geophysics, 70, no. 6. Li, Y., 2001, 3-D inversion of gravity gradiometry data: 71st Annual Meeting, SEG, Expanded Abstracts, 1470-1473. Zhdanov, M. S., Ellis, R. G., and Mukherjee, S., 2004, Regularized focusing inversion of 3-D gravity tensor data: Geophysics, 69, 925-937.
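    The following Python sketch illustrates how a structural index can enter a depth-weighting function and a damped minimum-length solution; the exponent convention, sensitivity kernel, and numerical values are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np

def depth_weighting(z_cells, structural_index, z0=0.0):
    """Depth weighting with the structural index N used as the rate decay:
    w(z) = (z + z0)^(-N/2). The exact exponent convention depends on how the
    weights enter the regularization term, so treat this as an illustrative choice."""
    z = np.asarray(z_cells, float)
    return (z + z0) ** (-structural_index / 2.0)

def weighted_minimum_length(G, d, w, damping=1e-6):
    """Damped minimum-length solution of G m = d with depth weights w applied to
    the model norm, counteracting the shallow bias of the unweighted solution."""
    Winv = np.diag(1.0 / w ** 2)
    return Winv @ G.T @ np.linalg.solve(G @ Winv @ G.T + damping * np.eye(len(d)), d)

# Toy example: one datum, five cells at increasing depth, structural index 2
# (roughly a sphere-like source); the sensitivity kernel is hypothetical.
z = np.array([10.0, 30.0, 60.0, 100.0, 150.0])
G = (1.0 / z ** 2).reshape(1, -1)
m_est = weighted_minimum_length(G, np.array([1e-3]), depth_weighting(z, 2.0))
print(m_est)
```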

  20. Applications of the JARS method to study levee sites in southern Texas and southern New Mexico

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Dunbar, J.B.

    2007-01-01

    We apply the joint analysis of refractions with surface waves (JARS) method to several sites and compare its results to traditional refraction-tomography methods, in an effort to find a more realistic solution to the inverse refraction-traveltime problem. The JARS method uses a reference model, derived from surface-wave shear-wave velocity estimates, as a constraint. In all cases the JARS estimates appear more realistic than those from the conventional refraction-tomography methods. As a result, we consider the JARS algorithm the preferred method for finding solutions to inverse refraction-traveltime problems. © 2007 Society of Exploration Geophysicists.

  1. pyGIMLi: An open-source library for modelling and inversion in geophysics

    NASA Astrophysics Data System (ADS)

    Rücker, Carsten; Günther, Thomas; Wagner, Florian M.

    2017-12-01

    Many tasks in applied geosciences cannot be solved by single measurements, but require the integration of geophysical, geotechnical and hydrological methods. Numerical simulation techniques are essential both for planning and interpretation, as well as for the process understanding of modern geophysical methods. These trends encourage open, simple, and modern software architectures aiming at a uniform interface for interdisciplinary and flexible modelling and inversion approaches. We present pyGIMLi (Python Library for Inversion and Modelling in Geophysics), an open-source framework that provides tools for modelling and inversion of various geophysical but also hydrological methods. The modelling component supplies discretization management and the numerical basis for finite-element and finite-volume solvers in 1D, 2D and 3D on arbitrarily structured meshes. The generalized inversion framework solves the minimization problem with a Gauss-Newton algorithm for any physical forward operator and provides opportunities for uncertainty and resolution analyses. More general requirements, such as flexible regularization strategies, time-lapse processing and different ways of coupling individual methods, are provided independently of the actual methods used. The usage of pyGIMLi is first demonstrated by solving the steady-state heat equation, followed by a demonstration of more complex capabilities for the combination of different geophysical data sets. A fully coupled hydrogeophysical inversion of electrical resistivity tomography (ERT) data of a simulated tracer experiment is presented that allows the underlying hydraulic conductivity distribution of the aquifer to be reconstructed directly. Another example demonstrates the improvement of jointly inverting ERT and ultrasonic data with respect to saturation by a new approach that incorporates petrophysical relations in the inversion. Potential applications of the presented framework are manifold and include time-lapse, constrained, joint, and coupled inversions of various geophysical and hydrological data sets.
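    The inversion component described above is built around a regularized Gauss-Newton scheme; the following generic Python sketch (deliberately not using the pyGIMLi API) shows one damped Gauss-Newton loop on a toy exponential-decay problem.

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, lam=1e-3, n_iter=10):
    """Generic damped Gauss-Newton iteration of the kind used by many geophysical
    inversion frameworks (a sketch, not the pyGIMLi API): at each step solve
    (J^T J + lam * I) dm = J^T (d_obs - F(m)) and update the model."""
    m = np.array(m0, float)
    for _ in range(n_iter):
        r = d_obs - forward(m)                       # data residual
        J = jacobian(m)                              # sensitivity matrix
        dm = np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
        m = m + dm
    return m

# Toy nonlinear problem: fit amplitude and decay of y = a * exp(-b * t).
t = np.linspace(0.0, 5.0, 20)
forward = lambda m: m[0] * np.exp(-m[1] * t)
jacobian = lambda m: np.column_stack([np.exp(-m[1] * t), -m[0] * t * np.exp(-m[1] * t)])
d_obs = forward(np.array([2.0, 0.7]))
print(gauss_newton(forward, jacobian, d_obs, m0=[1.0, 0.3]))   # ~ [2.0, 0.7]
```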

  2. Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2017-12-01

    Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of the uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such inversion allows us to quantify the uncertainties of the inversion results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion for different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data due to their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversion of those two data types to quantify the benefit of doing joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the northern Great Plains.

  3. A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets

    NASA Astrophysics Data System (ADS)

    JafarGandomi, Arash; Binley, Andrew

    2013-09-01

    We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. Different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth-models. Accordingly, the ERT data that are collected by using two-dimensional acquisition geometry are re-cast as a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together via introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool to identify the appropriate combination of geophysical tools and to show whether application of an individual method for further investigation of a specific site is beneficial.

  4. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanisms, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered with sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded with a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be much improved.

  5. Crustal seismic structure beneath the southwest Yunnan region from joint inversion of body-wave and surface wave data

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Thurber, C. H.; Zeng, X.; Zhang, L.

    2016-12-01

    Data from 71 broadband stations of a dense transportable array deployed in southwest Yunnan make it possible to improve the resolution of the seismic model in this region. Continuous waveforms from 12 permanent stations of the China National Seismic Network were also used in this study. We utilized one year of continuous vertical-component records to compute ambient noise cross-correlation functions (NCF). More than 3,000 NCFs were obtained and used to measure group velocities between 5 and 25 seconds with the frequency-time analysis method. This frequency band is most sensitive to crustal seismic structure, especially the upper and middle crust. The group velocity at short periods shows a clear azimuthal anisotropy with a north-south fast direction. The fast direction is consistent with previous seismic results from shear-wave splitting. More than 2,000 group velocity measurements were employed to invert the surface wave dispersion data for group velocity maps. We applied a finite difference forward modeling algorithm with an iterative inversion. A new body-wave and surface wave joint inversion algorithm (Fang et al., 2016) was utilized to improve the resolution of both the P and S models. About 60,000 P-wave and S-wave arrivals from 1,780 local earthquakes, which occurred from May 2011 to December 2013 with magnitudes larger than 2.0, were manually picked. The new high-resolution seismic structure shows good consistency with local geological features, e.g. Tengchong Volcano. The earthquake locations were also refined with our new velocity model.

  6. Modelling and simulation of the intervertebral movements of the lumbar spine using an inverse kinematic algorithm.

    PubMed

    Sun, L W; Lee, R Y W; Lu, W; Luk, K D K

    2004-11-01

    An inverse kinematic model is presented that was employed to determine the optimum intervertebral joint configuration for a given forward-bending posture of the human trunk. The lumbar spine was modelled as an open-end, kinematic chain of five links that represented the five vertebrae (L1-L5). An optimisation equation with physiological constraints was employed to determine the intervertebral joint configuration. Intervertebral movements were measured from sagittal X-ray films of 22 subjects. The mean difference between the X-ray measurements of intervertebral rotations in the sagittal plane and the values predicted by the kinematic model was less than 1.6 degrees. The Pearson product-moment correlation R was used to measure the relationship between the measured and predicted values. The R-values were found to be high, ranging from 0.83 to 0.97, for prediction of intervertebral rotation, but poor for intervertebral translation (R = 0.08-0.67). It is concluded that the inverse kinematic model will be clinically useful for predicting intervertebral rotation when X-ray or invasive measurements are undesirable. It will also be useful for biomechanical modelling, which requires accurate kinematic information as model input data.

  7. Joint reconstruction of x-ray fluorescence and transmission tomography

    PubMed Central

    Di, Zichao Wendy; Chen, Si; Hong, Young Pyo; Jacobsen, Chris; Leyffer, Sven; Wild, Stefan M.

    2017-01-01

    X-ray fluorescence tomography is based on the detection of fluorescence x-ray photons produced following x-ray absorption while a specimen is rotated; it provides information on the 3D distribution of selected elements within a sample. One limitation in the quality of sample recovery is the separation of elemental signals due to the finite energy resolution of the detector. Another limitation is the effect of self-absorption, which can lead to inaccurate results with dense samples. To recover a higher quality elemental map, we combine x-ray fluorescence detection with a second data modality: conventional x-ray transmission tomography using absorption. By using these combined signals in a nonlinear optimization-based approach, we demonstrate the benefit of our algorithm on real experimental data and obtain an improved quantitative reconstruction of the spatial distribution of dominant elements in the sample. Compared with single-modality inversion based on x-ray fluorescence alone, this joint inversion approach reduces ill-posedness and should result in improved elemental quantification and better correction of self-absorption. PMID:28788848

  8. Frequency-domain optical tomographic image reconstruction algorithm with the simplified spherical harmonics (SP3) light propagation model.

    PubMed

    Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H

    2017-06-01

    We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd-order absorption coefficients (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and size that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is also more accurate than the FD-SP1.

  9. Recovery of surface mass redistribution from a joint inversion of GPS and GRACE data - A methodology and results from the Australian and other continents

    NASA Astrophysics Data System (ADS)

    Han, S. C.; Tangdamrongsub, N.; Razeghi, S. M.

    2017-12-01

    We present a methodology to invert a regional set of vertical displacement data from Global Positioning System (GPS) to determine surface mass redistribution. It is assumed that GPS deformation is a result of the Earth's elastic response to the surface mass load of hydrology, atmosphere, and ocean. The identical assumption is made when global geopotential change data from Gravity Recovery And Climate Experiment (GRACE) are used to determine surface mass changes. We developed an algorithm to estimate the spectral information of displacements from "regional" GPS data through regional spherical (Slepian) basis functions and apply the load Love numbers to estimate the mass load. We rigorously examine all systematic errors caused by various truncations (spherical harmonic series and Slepian series) and the smoothing constraint applied to the GPS-only inversion. We demonstrate the technique by processing 16 years of daily vertical motions determined from 114 GPS stations in Australia. The GPS inverted surface mass changes are validated against GRACE data, atmosphere and ocean models, and a land surface model. Seasonal and inter-annual terrestrial mass variations from GPS are in good agreement with GRACE data and the water storage models. The GPS recovery compares better with the water storage model around the smaller coastal basins of Australia than two different GRACE solutions. The sub-monthly mass changes from GPS provide meaningful results agreeing with atmospheric mass changes in central Australia. Finally, we integrate GPS data from different continents with GRACE in the least-square normal equations and solve for the global surface mass changes by jointly inverting GPS and GRACE data. We present the results of surface mass changes from the GPS-only inversion and from the joint GPS-GRACE inversion.

  10. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

    A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.

  11. A simplified Integer Cosine Transform and its application in image compression

    NASA Technical Reports Server (NTRS)

    Costa, M.; Tong, K.

    1994-01-01

    A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
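    A small Python sketch of the quantization idea described above: restricting the combined normalization/quantization factor to a power of two turns the integer division into a shift, and the decoder compensates for the approximation in floating point. The factor 11.3 and shift of 3 are hypothetical values, not taken from the paper.

```python
def encode_coefficient(coeff, shift):
    """Encoder side: the combined normalization/quantization factor is restricted
    to a power of two, so the integer division becomes a cheap binary shift."""
    return coeff >> shift if coeff >= 0 else -((-coeff) >> shift)

def decode_coefficient(q, exact_factor, shift):
    """Decoder side, executed in floating point: the nominal dequantization factor
    'exact_factor' is multiplied by the ratio 2**shift / exact_factor, which
    compensates for the encoder's power-of-two approximation (the net effect is
    simply q * 2**shift, but the compensation is what a decoder would fold into
    its inverse-transform scaling)."""
    return q * exact_factor * (2 ** shift / exact_factor)

# Hypothetical numbers: an exact combined factor of 11.3 approximated by 2**3 = 8.
c = 1234
q = encode_coefficient(c, 3)
print(q, decode_coefficient(q, 11.3, 3))   # residual error comes only from quantization
```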

  12. Estimation of the full-field dynamic response of a floating bridge using Kalman-type filtering algorithms

    NASA Astrophysics Data System (ADS)

    Petersen, Ø. W.; Øiseth, O.; Nord, T. S.; Lourens, E.

    2018-07-01

    Numerical predictions of the dynamic response of complex structures are often uncertain due to uncertainties inherited from the assumed load effects. Inverse methods can estimate the true dynamic response of a structure through system inversion, combining measured acceleration data with a system model. This article presents a case study of full-field dynamic response estimation of a long-span floating bridge: the Bergøysund Bridge in Norway. This bridge is instrumented with a network of 14 triaxial accelerometers. The system model consists of 27 vibration modes with natural frequencies below 2 Hz, obtained from a tuned finite element model that takes the fluid-structure interaction with the surrounding water into account. Two methods, a joint input-state estimation algorithm and a dual Kalman filter, are applied to estimate the full-field response of the bridge. The results demonstrate that the displacements and the accelerations can be estimated at unmeasured locations with reasonable accuracy when the wave loads are the dominant source of excitation.

  13. Identifying seawater intrusion in coastal areas by means of 1D and quasi-2D joint inversion of TDEM and VES data

    NASA Astrophysics Data System (ADS)

    Martínez-Moreno, F. J.; Monteiro-Santos, F. A.; Bernardo, I.; Farzamian, M.; Nascimento, C.; Fernandes, J.; Casal, B.; Ribeiro, J. A.

    2017-09-01

    Seawater intrusion is an increasingly widespread problem in coastal aquifers, caused by climate change (sea-level rise and extreme phenomena like flooding and droughts) and by groundwater depletion near the coastline. To evaluate and mitigate the environmental risks of this phenomenon it is necessary to characterize the coastal aquifer and the salt intrusion. Geophysical methods are the most appropriate tool to address these problems. Among all geophysical techniques, electrical methods are able to detect seawater intrusions due to the high resistivity contrast between saltwater, freshwater and the geological layers. The combination of two or more geophysical methods is recommended, and they are more efficient when both datasets are inverted jointly, because the final model encompasses the physical properties measured by each method. In this investigation, joint inversion of vertical electric and time domain soundings has been performed to examine seawater intrusion in an area within the Ferragudo-Albufeira aquifer system (Algarve, South of Portugal). For this purpose two profiles combining electrical resistivity tomography (ERT) and time domain electromagnetic (TDEM) methods were measured, and the results were compared with the information obtained from exploration drilling. Three different inversions have been carried out: single inversion of the ERT and TDEM data, 1D joint inversion and quasi-2D joint inversion. The single inversion results identify the seawater intrusion, although the sedimentary layers detected in the exploration drilling were not well differentiated. The models obtained with 1D joint inversion improve on the previous inversion due to better detection of the sedimentary layers, and the seawater intrusion appears to be better defined. Finally, the quasi-2D joint inversion reveals a more realistic shape of the seawater intrusion and is able to distinguish more of the sedimentary layers recognised in the exploration drilling. This study demonstrates that the quasi-2D joint inversion improves on the previous inversion methods, making it a powerful tool applicable to different research areas.

  14. Joint two-dimensional inversion of magnetotelluric and gravity data using correspondence maps

    NASA Astrophysics Data System (ADS)

    Carrillo, Jonathan; Gallardo, Luis A.

    2018-05-01

    An accurate characterization of subsurface targets relies on the interpretation of multiple geophysical properties and their relationships. There are mainly two ways to link different geophysical parameters in a joint inversion: structural and petrophysical relationships. Structural approaches aim at minimizing topological differences and are widely popular since they need only a few assumptions about the models. Conversely, methods based on petrophysical links rely mostly on the property values themselves and can provide a strong coupling between models, but they need to be treated carefully because a specific direct relationship must be known or assumed. While some petrophysical relationships are widely accepted, the question remains whether we may be able to detect them directly from the geophysical data. Currently, there is no reported development that takes full advantage of the flexibility of jointly estimating in-situ empirical relationships and geophysical models for a given geological scenario. We thus developed an algorithm for the two-dimensional joint inversion of gravity and magnetotelluric data that simultaneously seeks a density-resistivity relationship, described through a polynomial function, that is optimal for each studied site. The iterative two-dimensional scheme is tested using synthetic and field data from Cerro Prieto, Mexico. The resulting models show enhanced resolution with increased structural and petrophysical correlation. We show that by fitting a functional relationship we significantly increase the coupled geological sense of the models at little cost in terms of data misfit.
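    A minimal Python sketch of the petrophysical-link idea described above: fit an empirical polynomial density-resistivity relationship from co-located model cells and measure how far the current models depart from it; the property values and polynomial degree are illustrative assumptions.

```python
import numpy as np

def fit_property_link(density, log_resistivity, degree=2):
    """Fit an empirical polynomial relationship log(rho) = P(density) from
    co-located model cells, as a sketch of estimating the petrophysical link
    in situ rather than assuming it a priori."""
    return np.polyfit(density, log_resistivity, degree)

def coupling_misfit(density, log_resistivity, coeffs):
    """Petrophysical coupling term: squared departure of the current models from
    the fitted relationship, which a joint inversion would add to the data misfits."""
    return np.sum((log_resistivity - np.polyval(coeffs, density)) ** 2)

# Toy co-located models on a few cells (hypothetical values for a sedimentary section).
rho_model = np.array([2.0, 2.2, 2.4, 2.6, 2.7])          # density, g/cm^3
logres_model = np.array([1.9, 1.6, 1.2, 0.9, 0.8])       # log10 resistivity
coeffs = fit_property_link(rho_model, logres_model)
print(coeffs, coupling_misfit(rho_model, logres_model, coeffs))
```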

  15. Inverse modeling of InSAR and ground leveling data for 3D volumetric strain distribution

    NASA Astrophysics Data System (ADS)

    Gallardo, L. A.; Glowacka, E.; Sarychikhina, O.

    2015-12-01

    The wide availability of modern Interferometric Synthetic Aperture Radar (InSAR) data has made possible the extensive observation of differential surface displacements, and InSAR is becoming an efficient tool for the detailed monitoring of terrain subsidence associated with reservoir dynamics, volcanic deformation and active tectonism. Unfortunately, this increasing popularity has not been matched by the availability of automated codes to estimate underground deformation, since many of them still rely on trial-and-error subsurface model building strategies. We posit that an efficient algorithm for the volumetric modeling of differential surface displacements should match the availability of current leveling and InSAR data, and we have developed an algorithm for the joint inversion of ground leveling and dInSAR data in 3D. We assume the ground displacements are originated by a stress-free volume strain distribution in a homogeneous elastic medium and determine the displacement field associated with an ensemble of rectangular prisms. This formulation is then used to develop a 3D conjugate gradient inversion code that searches for the three-dimensional distribution of the volumetric strains that predicts InSAR and leveling surface displacements simultaneously. The algorithm is regularized by applying discontinuous first- and zero-order Tikhonov constraints. For efficiency, the resulting computational code takes advantage of the convolution integral associated with the deformation field and some basic tools for multithreaded parallelization. We extensively test our algorithm on leveling and InSAR test and field data from northwestern Mexico and compare the results to feasible geological scenarios of underground deformation.

  16. Influence of altered gait patterns on the hip joint contact forces.

    PubMed

    Carriero, Alessandra; Zavatsky, Amy; Stebbins, Julie; Theologis, Tim; Lenaerts, Gerlinde; Jonkers, Ilse; Shefelbine, Sandra J

    2014-01-01

    Children who exhibit gait deviations often present a range of bone deformities, particularly at the proximal femur. Altered gait may affect bone growth and lead to deformities by exerting abnormal stresses on the developing bones. The objective of this study was to calculate variations in the hip joint contact forces with different gait patterns. Muscle and hip joint contact forces of four children with different walking characteristics were calculated using an inverse dynamic analysis and a static optimisation algorithm. Kinematic and kinetic analyses were based on a generic musculoskeletal model scaled down to accommodate the dimensions of each child. Results showed that for all the children with altered gaits both the orientation and magnitude of the hip joint contact force deviated from normal. The child with the most severe gait deviations had hip joint contact forces 30% greater than normal, most likely due to the increase in muscle forces required to sustain his crouched stance. Determining how altered gait affects joint loading may help in planning treatment strategies to preserve correct loading on the bone from a young age.

  17. Active and passive electrical and seismic time-lapse monitoring of earthen embankments

    NASA Astrophysics Data System (ADS)

    Rittgers, Justin Bradley

    In this dissertation, I present research involving the application of active and passive geophysical data collection, data assimilation, and inverse modeling for the purpose of earthen embankment infrastructure assessment. Throughout the dissertation, I identify several data characteristics, and several challenges intrinsic to characterization and imaging of earthen embankments and anomalous seepage phenomena, from both a static and time-lapse geophysical monitoring perspective. I begin with the presentation of a field study conducted on a seeping earthen dam, involving static and independent inversions of active tomography data sets, and self-potential modeling of fluid flow within a confined aquifer. Additionally, I present results of active and passive time-lapse geophysical monitoring conducted during two meso-scale laboratory experiments involving the failure and self-healing of embankment filter materials via induced vertical cracking. Identified data signatures and trends, as well as 4D inversion results, are discussed as an underlying motivation for conducting subsequent research. Next, I present a new 4D acoustic emissions source localization algorithm that is applied to passive seismic monitoring data collected during a full-scale embankment failure test. Acoustic emissions localization results are then used to help spatially constrain 4D inversion of collocated self-potential monitoring data. I then turn to time-lapse joint inversion of active tomographic data sets applied to the characterization and monitoring of earthen embankments. Here, I develop a new technique for applying spatiotemporally varying structural joint inversion constraints. The new technique, referred to as Automatic Joint Constraints (AJC), is first demonstrated on a synthetic 2D joint model space, and is then applied to real geophysical monitoring data sets collected during a full-scale earthen embankment piping-failure test. Finally, I discuss some non-technical issues related to earthen embankment failures from a Science, Technology, Engineering, and Policy (STEP) perspective. Here, I discuss how the proclaimed scientific expertise and shifting of responsibility (Responsibilization) by governing entities tasked with operating and maintaining water storage and conveyance infrastructure throughout the United States tends to create barriers for 1) public voice and participation in relevant technical activities and outcomes, 2) meaningful discussions with the public and media during crisis communication, and 3) public perception of risk and the associated resilience of downhill communities.

  18. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method called square-root cubature Kalman smoother (SCKS) for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method that works in the filtering sense.

  19. A random-walk algorithm for modeling lithospheric density and the role of body forces in the evolution of the Midcontinent Rift

    USGS Publications Warehouse

    Levandowski, William Brower; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.

    2015-01-01

    We test this algorithm on the Proterozoic Midcontinent Rift (MCR), north-central U.S. The MCR provides a challenge because it hosts a gravity high overlying low shear-wave velocity crust in a generally flat region. Our initial density estimates are derived from a seismic velocity/crustal thickness model based on joint inversion of surface-wave dispersion and receiver functions. By adjusting these estimates to reproduce gravity and topography, we generate a lithospheric-scale model that reveals dense middle crust and eclogitized lowermost crust within the rift. Mantle lithospheric density beneath the MCR is not anomalous, consistent with geochemical evidence that lithospheric mantle was not the primary source of rift-related magmas and suggesting that extension occurred in response to far-field stress rather than a hot mantle plume. Similarly, the subsequent inversion of normal faults resulted from changing far-field stress that exploited not only warm, recently faulted crust but also a gravitational potential energy low in the MCR. The success of this density modeling algorithm in the face of such apparently contradictory geophysical properties suggests that it may be applicable to a variety of tectonic and geodynamic problems. 

  20. Local sensory control of a dexterous end effector

    NASA Technical Reports Server (NTRS)

    Pinto, Victor H.; Everett, Louis J.; Driels, Morris

    1990-01-01

    A numerical scheme was developed to solve the inverse kinematics for a user-defined manipulator. The scheme was based on a nonlinear least-squares technique which determines the joint variables by minimizing the difference between the target end effector pose and the actual end effector pose. The scheme was adapted to a dexterous hand in which the joints are either prismatic or revolute and the fingers are considered open kinematic chains. Feasible solutions were obtained using a three-fingered dexterous hand. An algorithm to estimate the position and orientation of a pre-grasped object was also developed. The algorithm was based on triangulation using an ideal sensor and a spherical object model. By choosing the object to be a sphere, only the position of the object frame was important. Based on these simplifications, a minimum of three sensors is needed to find the position of a sphere. A two-dimensional example to determine the position of a circle coordinate frame using a two-fingered dexterous hand was presented.
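
    The sphere-localization idea can be sketched as a small nonlinear least-squares problem: each finger sensor reports the distance to the sphere surface, and the centre is the point whose distances to the sensors, minus the known radius, match those readings. The sensor positions, radius and ranges below are made up for illustration and are not from the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical fingertip sensor positions (m) and ideal range readings to
    # the sphere surface; the sphere radius is assumed known.
    sensors = np.array([[0.00, 0.00, 0.00],
                        [0.10, 0.00, 0.00],
                        [0.00, 0.10, 0.00]])
    radius = 0.02
    true_center = np.array([0.04, 0.03, 0.05])
    ranges = np.linalg.norm(sensors - true_center, axis=1) - radius

    def residuals(center):
        # each sensor should see (distance to centre - radius) as its reading
        return np.linalg.norm(sensors - center, axis=1) - radius - ranges

    # initial guess on the grasp side of the sensor plane; with three coplanar
    # sensors the mirror solution below the plane also fits the ranges
    sol = least_squares(residuals, x0=np.array([0.02, 0.02, 0.05]))
    print(sol.x)   # estimated sphere centre, close to true_center
    ```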

  1. 2D Inversion of DCR and Time Domain IP data: an example from ore exploration

    NASA Astrophysics Data System (ADS)

    Adrian, J.; Tezkan, B.

    2015-12-01

    Ore deposits often appear as disseminated sulfidic materials. Exploring these deposits with the Direct Current Resistivity (DCR) method alone is challenging because the resistivity signatures caused by disseminated material are often hard to detect. The Time-domain Induced Polarization (TDIP) method, on the other hand, is well suited to detect areas with disseminated sulfidic ores due to large electrode polarization effects which result in large chargeability anomalies. By employing both methods we gain information about both the resistivity and the chargeability distribution of the subsurface. On the poster we present the current state of the development of a 2D smoothness-constrained inversion algorithm for DCR and TDIP data. The implemented forward algorithm uses a Finite Element approach with an unstructured mesh. The model parameters resistivity and chargeability are connected by either a simple conductivity perturbation approach or a complex conductivity approach. As a case study, the 2D inversion results of DCR/TDIP and RMT data obtained during a survey on a sulfidic copper ore deposit in Turkey are presented. The presence of an ore deposit is indicated by areas with low resistivity and significantly high chargeability in the inversion models. This work is part of the BMBF/TUEBITAK funded project ``Two-dimensional joint interpretation of Radiomagnetotellurics (RMT), Direct Current Resistivity (DCR) and Induced Polarization (IP) data: an example from ore exploration''.

  2. Adaptive control of robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The author presents a novel approach to adaptive control of manipulators to achieve trajectory tracking by the joint angles. The central concept in this approach is the utilization of the manipulator inverse as a feedforward controller. The desired trajectory is applied as an input to the feedforward controller which behaves as the inverse of the manipulator at any operating point; the controller output is used as the driving torque for the manipulator. The controller gains are then updated by an adaptation algorithm derived from MRAC (model reference adaptive control) theory to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal are also used to enhance closed-loop stability and to achieve faster adaptation. The proposed control scheme is computationally fast and does not require a priori knowledge of the complex dynamic model or the parameter values of the manipulator or the payload.
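
    The feedforward-inverse idea can be illustrated on a toy single joint: the feedforward torque is the desired acceleration scaled by an adaptive estimate of the (unknown) inertia, a PD term provides the auxiliary feedback, and a gradient-type (MIT-rule style) update drives the estimate with the tracking error. All gains, the plant and the trajectory below are made up; this is a loose sketch of the idea, not the controller derived in the paper.

    ```python
    import numpy as np

    dt, T = 0.001, 20.0
    I_true = 2.0           # unknown joint inertia; the ideal feedforward gain
    theta = 0.5            # adaptive estimate of that gain
    gamma = 50.0           # adaptation gain (hypothetical)
    kp, kd = 80.0, 20.0    # auxiliary PD feedback for closed-loop stability

    q, dq = 0.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        q_des, dq_des, ddq_des = np.sin(t), np.cos(t), -np.sin(t)
        # feedforward torque from the estimated inverse model, plus feedback
        tau = theta * ddq_des + kp * (q_des - q) + kd * (dq_des - dq)
        ddq = tau / I_true                      # simplified plant dynamics
        q, dq = q + dq * dt, dq + ddq * dt      # Euler integration
        e = q_des - q
        theta += gamma * e * ddq_des * dt       # MIT-rule style gain update

    print("adapted feedforward gain:", theta)   # drifts toward the true inertia
    ```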

  3. Time-Lapse Joint Inversion of Cross-Well DC Resistivity and Seismic Data: A Numerical Investigation

    EPA Science Inventory

    Time-lapse joint inversion of geophysical data is required to image the evolution of oil reservoirs during production and enhanced oil recovery, CO2 sequestration, geothermal fields during production, and to monitor the evolution of contaminant plumes. Joint inversion schemes red...

  4. A fast and robust kinematic model for a 12 DoF hyper-redundant robot positioning: An optimization proposal

    NASA Astrophysics Data System (ADS)

    Lima, José; Pereira, Ana I.; Costa, Paulo; Pinto, Andry; Costa, Pedro

    2017-07-01

    This paper describes an optimization procedure for a robot with 12 degrees of freedom that avoids the inverse kinematics problem, which is a hard task for this type of robot manipulator. This robot can be used for pick-and-place tasks in complex designs. By combining an accurate and fast direct kinematics model with optimization strategies, it is possible to obtain the joint angles for a desired end-effector position and orientation. The stretched simulated annealing algorithm and a genetic algorithm were used as the optimization methods. The solutions found were validated using data from both a real and a simulated robot composed of 12 servomotors with a gripper.
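
    The strategy of replacing inverse kinematics with "fast forward kinematics plus a global optimizer" can be shown on a much smaller stand-in, a planar 3-link arm. The link lengths, target pose and weighting below are hypothetical, and scipy's dual annealing is used in place of the paper's stretched simulated annealing and genetic algorithms.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    # Planar 3-link stand-in for the 12-DoF robot: forward kinematics are
    # cheap, so the pose error is minimised directly over the joint angles.
    link_lengths = np.array([0.3, 0.25, 0.15])            # hypothetical
    target_xy, target_phi = np.array([0.45, 0.30]), 0.8   # desired pose

    def forward(q):
        angles = np.cumsum(q)
        xy = np.sum(link_lengths[:, None] *
                    np.column_stack([np.cos(angles), np.sin(angles)]), axis=0)
        return xy, angles[-1]

    def pose_error(q):
        xy, phi = forward(q)
        return np.linalg.norm(xy - target_xy) + 0.1 * abs(phi - target_phi)

    bounds = [(-np.pi, np.pi)] * 3
    result = dual_annealing(pose_error, bounds=bounds, maxiter=300)
    print(result.x, pose_error(result.x))
    ```

    For a redundant arm many joint configurations reach the same pose, which is exactly why a global optimizer over the forward model is attractive here.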

  5. A new approach to adaptive control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An approach in which the manipulator inverse is used as a feedforward controller is employed in the adaptive control of manipulators in order to achieve trajectory tracking by the joint angles. The desired trajectory is applied as an input to the feedforward controller, and the controller output is used as the driving torque for the manipulator. An adaptive algorithm obtained from MRAC theory is used to update the controller gains to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal enhance closed-loop stability and achieve faster adaptation. Simulation results demonstrate the effectiveness of the proposed control scheme for different reference trajectories, and despite large variations in the payload.

  6. Trans-dimensional joint inversion of seabed scattering and reflection data.

    PubMed

    Steininger, Gavin; Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2013-03-01

    This paper examines joint inversion of acoustic scattering and reflection data to resolve seabed interface roughness parameters (spectral strength, exponent, and cutoff) and geoacoustic profiles. Trans-dimensional (trans-D) Bayesian sampling is applied with both the number of sediment layers and the order (zeroth or first) of auto-regressive parameters in the error model treated as unknowns. A prior distribution that allows fluid sediment layers over an elastic basement in a trans-D inversion is derived and implemented. Three cases are considered: Scattering-only inversion, joint scattering and reflection inversion, and joint inversion with the trans-D auto-regressive error model. Including reflection data improves the resolution of scattering and geoacoustic parameters. The trans-D auto-regressive model further improves scattering resolution and correctly differentiates between strongly and weakly correlated residual errors.

  7. Effects of Heterogeneities on the Propagation, Scattering and Attenuation of Seismic Waves and the Characterization of Seismic Source

    DTIC Science & Technology

    1985-01-01

    of Kilauea volcano, Hawaii, Science, 223, 165-167, 1984. Tribolet, J.M., A new phase unwrapping algorithm, IEEE Trans. Acoust. Speech and Signal... under the Kilauea volcano using a travel time inversion. He found a high velocity core of the volcano surrounding an interior lower velocity region... Helens volcano. This was a joint effort undertaken by Oregon State University, Massachusetts Institute of Technology, and the U.S. Geological Survey

  8. Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan

    2018-01-01

    In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary even though cell sizes are allowed to grow toward the boundaries due to the diffusion of the electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferred as the modelling area of interest can be restricted to the target region and only a few surrounding absorbing layers can effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling area for these two different geophysical data sets collected from the same survey area could be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling by using the staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML also shows good accuracy compared to the Dirichlet boundary. Furthermore, the modelling algorithm using the CFS-PML offers savings in computational time and memory compared with the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
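
    For reference, the complex frequency-shifted coordinate stretching that gives the CFS-PML its name is commonly written as below. This follows one standard convention from the CFS-PML literature; the paper's exact symbols and normalization may differ.

    ```latex
    % CFS-PML complex coordinate stretching (one common convention):
    s_x(x) = \kappa_x(x) + \frac{\sigma_x(x)}{\alpha_x(x) + \mathrm{i}\,\omega\,\varepsilon_0},
    \qquad
    \partial_x \;\longrightarrow\; \frac{1}{s_x(x)}\,\partial_x
    ```

    Here κ ≥ 1 stretches the real coordinate, σ provides absorption, and the frequency-shift term α moves the pole away from zero frequency, which is what improves the absorption of low-frequency and evanescent energy relative to the classical PML.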

  9. Inversion Method for Early Detection of ARES-1 Case Breach Failure

    NASA Technical Reports Server (NTRS)

    Mackey, Ryan M.; Kulikov, Igor K.; Bajwa, Anupa; Berg, Peter; Smelyanskiy, Vadim

    2010-01-01

    A document describes research into the problem of detecting case breach formation at an early stage of a rocket flight. An inversion algorithm for case breach location is proposed and analyzed. It is shown how the case breach can be located at an early stage of its development by using the rocket sensor data and the output data from the control block of the rocket navigation system. The results are simulated with MATLAB/Simulink software. The efficiency of an inversion algorithm for case breach location is discussed. The research was devoted to the analysis of the ARES-1 flight during the first 120 seconds after the launch and early prediction of case breach failure. During this time, the rocket is propelled by its first-stage Solid Rocket Booster (SRB). If a breach appears in the SRB case, the gases escaping through it will produce a (side) thrust directed perpendicular to the rocket axis. The side thrust creates torque influencing the rocket attitude. The ARES-1 control system will compensate for the side thrust until it reaches some critical value, after which the flight will be uncontrollable. The objective of this work was to obtain the start time of case breach development and its location using the rocket inertial navigation sensors and GNC data. The algorithm was effective for the detection and location of a breach in an SRB field joint at an early stage of its development.

  10. Stochastic Inversion of 2D Magnetotelluric Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2010-07-01

    The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. Those unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating system/version: Linux/Unix or Windows
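
    The core of such Bayesian sampling is a Markov chain that accepts or rejects perturbed models according to their posterior probability. The toy sketch below uses a made-up two-parameter forward model in place of the paper's 2D finite-element MT solver, purely to show the Metropolis-Hastings mechanics and the resulting uncertainty estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the MT problem: two unknowns (a log-resistivity level
    # and a depth-like parameter) and a made-up smooth forward operator.
    def forward(m):
        rho, depth = m
        periods = np.logspace(-1, 2, 10)
        return rho + depth * np.log10(periods)      # hypothetical response

    m_true = np.array([2.0, 0.5])
    sigma = 0.05
    data = forward(m_true) + rng.normal(0, sigma, 10)   # synthetic noisy data

    def log_posterior(m):
        resid = (data - forward(m)) / sigma
        return -0.5 * np.sum(resid ** 2)             # flat prior assumed

    # Metropolis-Hastings random walk
    m, samples = np.array([1.0, 1.0]), []
    for _ in range(20000):
        cand = m + rng.normal(0, 0.02, 2)
        if np.log(rng.uniform()) < log_posterior(cand) - log_posterior(m):
            m = cand
        samples.append(m.copy())

    samples = np.array(samples[5000:])               # drop burn-in
    print(samples.mean(axis=0), samples.std(axis=0)) # posterior mean and spread
    ```

    The standard deviations of the retained samples are the kind of per-parameter uncertainty information the abstract highlights as the main benefit of the stochastic approach.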

  11. A Joint Optimization Criterion for Blind DS-CDMA Detection

    NASA Astrophysics Data System (ADS)

    Durán-Díaz, Iván; Cruces-Alvarez, Sergio A.

    2006-12-01

    This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.

  12. A simple approach to the joint inversion of seismic body and surface waves applied to the southwest U.S.

    NASA Astrophysics Data System (ADS)

    West, Michael; Gao, Wei; Grand, Stephen

    2004-08-01

    Body and surface wave tomography have complementary strengths when applied to regional-scale studies of the upper mantle. We present a straightforward technique for their joint inversion which hinges on treating surface waves as horizontally propagating rays with deep sensitivity kernels. This formulation allows surface wave phase or group measurements to be integrated directly into existing body wave tomography inversions with modest effort. We apply the joint inversion to a synthetic case and to data from the RISTRA project in the southwest U.S. The data variance reductions demonstrate that the joint inversion produces a better fit to the combined dataset, not merely a compromise. For large arrays, this method offers an improvement over augmenting body wave tomography with a one-dimensional model. The joint inversion combines the absolute velocity of a surface wave model with the high resolution afforded by body waves, both qualities that are required to understand regional-scale mantle phenomena.
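
    Operationally, folding surface-wave measurements into a body-wave tomography system amounts to stacking the two linear(ized) systems with a relative weight and solving them together. The sketch below does this for a 1-D column of slowness perturbations with invented sensitivity kernels (sparse, localized rows for body waves; broad depth kernels for surface waves); it is an illustration of the stacking idea, not the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_cells = 50                       # slowness perturbations in a 1-D column

    # Hypothetical sensitivity kernels
    A_body = (rng.random((80, n_cells)) < 0.1).astype(float)   # sparse rays
    depth = np.arange(n_cells)
    A_surf = np.exp(-np.outer(1.0 / np.array([5, 10, 20, 40]), depth))

    m_true = rng.normal(0, 1, n_cells)
    d_body = A_body @ m_true + rng.normal(0, 0.05, 80)
    d_surf = A_surf @ m_true + rng.normal(0, 0.05, 4)

    w = 3.0                                   # relative weight on surface waves
    A = np.vstack([A_body, w * A_surf])
    d = np.concatenate([d_body, w * d_surf])

    # damped least squares for the stacked joint system
    lam = 0.5
    m_est = np.linalg.solve(A.T @ A + lam * np.eye(n_cells), A.T @ d)
    print(np.corrcoef(m_est, m_true)[0, 1])
    ```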

  13. Joint reconstruction of x-ray fluorescence and transmission tomography

    DOE PAGES

    Di, Zichao; Chen, Si; Hong, Young Pyo; ...

    2017-05-30

    X-ray fluorescence tomography is based on the detection of fluorescence x-ray photons produced following x-ray absorption while a specimen is rotated; it provides information on the 3D distribution of selected elements within a sample. One limitation in the quality of sample recovery is the separation of elemental signals due to the finite energy resolution of the detector. Another limitation is the effect of self-absorption, which can lead to inaccurate results with dense samples. To recover a higher quality elemental map, we combine x-ray fluorescence detection with a second data modality: conventional x-ray transmission tomography using absorption. By using these combined signals in a nonlinear optimization-based approach, we demonstrate the benefit of our algorithm on real experimental data and obtain an improved quantitative reconstruction of the spatial distribution of dominant elements in the sample. Furthermore, compared with single-modality inversion based on x-ray fluorescence alone, this joint inversion approach reduces ill-posedness and should result in improved elemental quantification and better correction of self-absorption.

  14. Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials

    NASA Astrophysics Data System (ADS)

    Niu, Qifei; Zhang, Chi

    2018-03-01

    There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables. Therefore, simultaneous processing of multiple data sets could potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when the knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution of the samples, including the general shape and relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample could be extracted with the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion could improve the resolution of the estimated pore size distribution because of the addition of extra data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
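
    A Gauss-Newton iteration over a joint residual (both data sets stacked, one shared model vector) is the workhorse of this kind of formulation. The sketch below uses two toy nonlinear forward operators standing in for the NMR T2 and SIP responses, with a finite-difference Jacobian and a small damping term; the functions and numbers are illustrative assumptions, not the paper's petrophysical models.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Shared model: two parameters standing in for pore-size descriptors.
    def f_nmr(m):
        return np.array([m[0] * np.exp(-m[1] * t) for t in (0.1, 0.3, 1.0, 3.0)])

    def f_sip(m):
        return np.array([m[0] / (1.0 + (m[1] * w) ** 2) for w in (0.5, 1.0, 2.0)])

    def joint_residual(m, d1, d2):
        return np.concatenate([f_nmr(m) - d1, f_sip(m) - d2])

    def jacobian(m, d1, d2, h=1e-6):
        r0 = joint_residual(m, d1, d2)
        J = np.zeros((r0.size, m.size))
        for j in range(m.size):                  # finite-difference Jacobian
            mp = m.copy(); mp[j] += h
            J[:, j] = (joint_residual(mp, d1, d2) - r0) / h
        return J, r0

    m_true = np.array([1.5, 0.8])
    d1 = f_nmr(m_true) + rng.normal(0, 0.01, 4)
    d2 = f_sip(m_true) + rng.normal(0, 0.01, 3)

    m = np.array([1.0, 0.3])
    for _ in range(10):                          # Gauss-Newton iterations
        J, r = jacobian(m, d1, d2)
        m = m - np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ r)
    print(m)                                     # close to m_true
    ```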

  15. Kinematic Slip Model for 12 May 2008 Wenchuan-Beichuan Mw 7.9 Earthquake from Joint Inversion of ALOS, Envisat, and Teleseismic Data

    NASA Technical Reports Server (NTRS)

    Fielding, Eric; Sladen, Anthony; Avouac, Jean-Philippe; Li, Zhenhong; Ryder, Isabelle; Burgmann, Roland

    2008-01-01

    The presentation explores the kinematics of the Wenchuan-Beichuan earthquake using data from ALOS, Envisat, and teleseismic recordings. Topics include geomorphic mapping, ALOS PALSAR range offsets, ALOS PALSAR interferometry, Envisat IM interferometry, Envisat ScanSAR, joint GPS-InSAR inversion, and joint GPS-teleseismic inversion (static and kinematic).

  16. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.

  17. Voxel inversion of airborne electromagnetic data

    NASA Astrophysics Data System (ADS)

    Auken, E.; Fiandaca, G.; Kirkegaard, C.; Vest Christiansen, A.

    2013-12-01

    Inversion of electromagnetic data usually refers to a model space being linked to the actual observation points, and for airborne surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid, not correlated to the geophysical model space. This means that incorporating the geophysical data into the geological and/or hydrological modelling grids involves a spatial relocation of the models, which in itself is a subtle process where valuable information is easily lost. Also the integration of prior information, e.g. from boreholes, is difficult when the observation points do not coincide with the position of the prior information, as is the joint inversion of airborne and ground-based surveys. We developed a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which allows for directly informing geological/hydrogeological models, for easier incorporation of prior information and for straightforward integration of different data types in joint inversion. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the properties is computed everywhere by means of an interpolation function f (e.g. inverse distance or kriging). The position of the nodes is fixed during the inversion and is chosen to sample the soil taking into account topography and inversion resolution. Given this definition of the voxel model space, both 1D and 2D/3D forward responses can be computed. The 1D forward responses are computed as follows: A) a 1D model subdivision, in terms of model thicknesses and direction of the "virtual" horizontal stratification, is defined for each 1D data set. For EM soundings the "virtual" horizontal stratification is set up parallel to the topography at the sounding position. B) the "virtual" 1D models are constructed by interpolating the soil properties at the midpoints of the "virtual" layers. For 2D/3D forward responses the algorithm operates similarly, simply filling the 2D/3D meshes of the forward responses by computing the interpolation values in the centres of the mesh cells. The new definition of the voxel model space allows the geophysical information to be incorporated straightforwardly into geological and/or hydrological models, simply by defining the geophysical model space on a voxel (hydro)geological grid. This also simplifies the propagation of the uncertainty of geophysical parameters into the (hydro)geological models. Furthermore, prior information from boreholes, like resistivity logs, can be applied directly to the voxel model space, even if the borehole positions do not coincide with the actual observation points. In fact, the prior information is constrained to the model parameters through the interpolation function at the borehole locations. The presented algorithm is a further development of the AarhusInv program package developed at Aarhus University (formerly em1dinv), which manages both large scale AEM surveys and ground-based data. This work has been carried out as part of the HyGEM project, supported by the Danish Council of Strategic Research under grant number DSF 11-116763.
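
    The mapping from fixed voxel nodes to the midpoints of the "virtual" 1D layers can be realized with an inverse-distance weighting, one of the interpolation functions f mentioned above. The node positions, values and layer midpoints below are hypothetical; this only sketches the interpolation step, not the AarhusInv implementation.

    ```python
    import numpy as np

    # Hypothetical voxel nodes (x, z) carrying log-resistivity values, and the
    # midpoints of a "virtual" 1D layering below one sounding position.
    nodes = np.array([[0.0, 5.0], [50.0, 5.0], [0.0, 30.0], [50.0, 30.0]])
    node_log_rho = np.array([1.0, 1.3, 2.0, 2.2])
    midpoints = np.array([[20.0, 2.5], [20.0, 10.0], [20.0, 25.0], [20.0, 60.0]])

    def idw(points, values, targets, power=2.0, eps=1e-6):
        """Inverse-distance weighted interpolation of node values onto targets."""
        out = np.empty(len(targets))
        for i, t in enumerate(targets):
            d = np.linalg.norm(points - t, axis=1) + eps
            w = 1.0 / d ** power
            out[i] = np.sum(w * values) / np.sum(w)
        return out

    layer_log_rho = idw(nodes, node_log_rho, midpoints)
    print(10 ** layer_log_rho)  # resistivities assigned to the virtual 1D layers
    ```

    Because the same interpolation maps node values to any target point, prior information such as a borehole resistivity log can be tied to the node parameters through exactly the same operator, which is the point made in the abstract.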

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie; Maceira, Monica; Phillips, William Scott

    These are slides which show many graphs and datasets for the above-mentioned topic and then conclude with the following: Joint inversion of multiple geophysical datasets improves recovery of velocity structures, particularly in Vs and in shallow parts of the model, in comparison to travel-time-only models. Resulting fits to travel time data are minimally degraded by joint inversions. Correspondingly, fits to independent estimates of ground-truth locations are minimally affected by joint inversions.

  19. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  20. Sorting signed permutations by inversions in O(nlogn) time.

    PubMed

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
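
    To make the object of study concrete, the sketch below implements the signed-reversal operation and a brute-force breadth-first search for the inversion distance on very small permutations. This is only an illustration of the problem; the algorithms in the paper achieve O(n log n) behaviour and are far beyond this exhaustive search.

    ```python
    from collections import deque

    def reversal(perm, i, j):
        """Apply a signed reversal to the segment perm[i..j] (inclusive)."""
        seg = tuple(-x for x in reversed(perm[i:j + 1]))
        return perm[:i] + seg + perm[j + 1:]

    def inversion_distance(perm):
        """Brute-force BFS; only feasible for very small signed permutations."""
        n = len(perm)
        identity = tuple(range(1, n + 1))
        seen, queue = {perm}, deque([(perm, 0)])
        while queue:
            p, d = queue.popleft()
            if p == identity:
                return d
            for i in range(n):
                for j in range(i, n):
                    q = reversal(p, i, j)
                    if q not in seen:
                        seen.add(q)
                        queue.append((q, d + 1))
        return None

    print(inversion_distance((3, -1, 2, -4)))   # shortest number of reversals
    ```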

  1. Anisotropic S-wave velocity structure from joint inversion of surface wave group velocity dispersion: A case study from India

    NASA Astrophysics Data System (ADS)

    Mitra, S.; Dey, S.; Siddartha, G.; Bhattacharya, S.

    2016-12-01

    We estimate 1-dimensional path average fundamental mode group velocity dispersion curves from regional Rayleigh and Love waves sampling the Indian subcontinent. The path average measurements are combined through a tomographic inversion to obtain 2-dimensional group velocity variation maps between periods of 10 and 80 s. The region of study is parametrised as triangular grids with 1° sides for the tomographic inversion. Rayleigh and Love wave dispersion curves from each node point are subsequently extracted and jointly inverted to obtain a radially anisotropic shear wave velocity model through global optimisation using a Genetic Algorithm. The parametrization of the model space is done using three crustal layers and four mantle layers over a half-space with varying VpH, VsV and VsH. The anisotropic parameter (η) is calculated from empirical relations and the densities of the layers are taken from PREM. Misfit for the model is calculated as a sum of error-weighted average dispersion curves. The 1-dimensional anisotropic shear wave velocity at each node point is combined using linear interpolation to obtain 3-dimensional structure beneath the region. Synthetic tests are performed to estimate the resolution of the tomographic maps which will be presented with our results. We envision extending this to a larger dataset in the near future to obtain high-resolution anisotropic shear wave velocity structure beneath India, Himalaya and Tibet.

  2. Seismic and thermodynamics constraints on temperature and composition of the Italian crust.

    NASA Astrophysics Data System (ADS)

    Diaferia, G.; Cammarano, F.; Piana Agostinetti, N.; Gao, C.; Boschi, L.; Molinari, I.

    2017-12-01

    Describing the variation of temperature and composition within the crust is of key importance for the understanding of its formation, evolution and its volcano-tectonic processes. We combine different geophysical observations with information on material properties, contributing to improving our knowledge of the structure and the chemical and thermal heterogeneity of the crust. We use thermodynamic modeling to assess the effects of temperature, pressure and water content on seismic velocities. We find that i) temperature, rather than composition and water content, plays a major role in affecting seismic properties of crustal rocks, ii) mineralogical phase transitions, such as the α-β quartz transition and the plagioclase breakdown, play an important role in seismic observables, iii) the ratio between shear-wave velocity and density does not change appreciably in the crust, even as temperature and mineralogy are varied. Informed by these findings, we apply a trans-dimensional Markov chain Monte Carlo inversion algorithm to jointly invert Rayleigh wave dispersion curves and receiver functions. Dispersion curves are derived from ambient noise and provide a homogeneous coverage of the Italian Peninsula. More than 200 receiver functions are used with their error and correlation functions included during the inversion phase, to account for data uncertainty. The ensemble of seismic models obtained through the joint inversion is analyzed and preliminary interpretations based on petrological and thermodynamic constraints are presented.

  3. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  4. Determining Crust and Upper Mantle Structure by Bayesian Joint Inversion of Receiver Functions and Surface Wave Dispersion at a Single Station: Preparation for Data from the InSight Mission

    NASA Astrophysics Data System (ADS)

    Jia, M.; Panning, M. P.; Lekic, V.; Gao, C.

    2017-12-01

    The InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission will deploy a geophysical station on Mars in 2018. Using seismology to explore the interior structure of Mars is one of the main targets, and as part of the mission, we will use 3-component seismic data to constrain the crust and upper mantle structure including P and S wave velocities and densities underneath the station. We will apply a reversible jump Markov chain Monte Carlo algorithm in the transdimensional hierarchical Bayesian inversion framework, in which the number of parameters in the model space and the noise level of the observed data are also treated as unknowns in the inversion process. Bayesian based methods produce an ensemble of models which can be analyzed to quantify uncertainties and trade-offs of the model parameters. In order to get better resolution, we will simultaneously invert three different types of seismic data: receiver functions, surface wave dispersion (SWD), and ZH ratios. Because the InSight mission will only deliver a single seismic station to Mars, and both the source location and the interior structure will be unknown, we will jointly invert the ray parameter in our approach. In preparation for this work, we first verify our approach by using a set of synthetic data. We find that SWD can constrain the absolute value of velocities while receiver functions constrain the discontinuities. By joint inversion, the velocity structure in the crust and upper mantle is well recovered. Then, we apply our approach to real data from the Earth-based seismic station BFO, located at the Black Forest Observatory in Germany, as already used in a demonstration study for single-station location methods. From the comparison of the results, our hierarchical treatment shows its advantage over the conventional method in which the noise level of the observed data is fixed a priori.

  5. Specific storage and hydraulic conductivity tomography through the joint inversion of hydraulic heads and self-potential data

    NASA Astrophysics Data System (ADS)

    Ahmed, A. Soueid; Jardani, A.; Revil, A.; Dupont, J. P.

    2016-03-01

    Transient hydraulic tomography is used to image the heterogeneous hydraulic conductivity and specific storage fields of shallow aquifers using time series of hydraulic head data. Such an ill-posed and non-unique inverse problem can be regularized using spatial geostatistical characteristics of the two fields. In addition to hydraulic head changes, the flow of water during pumping tests generates an electrical field of electrokinetic nature. These electrical field fluctuations can be passively recorded at the ground surface using a network of non-polarizing electrodes connected to a high impedance (> 10 MOhm) and sensitive (0.1 mV) voltmeter, a method known in geophysics as the self-potential method. We perform a joint inversion of the self-potential and hydraulic head data to image the hydraulic conductivity and specific storage fields. We work on a 3D synthetic confined aquifer and we use the adjoint state method to compute the sensitivities of the hydraulic parameters to the hydraulic head and self-potential data in both steady-state and transient conditions. The inverse problem is solved using the geostatistical quasi-linear algorithm framework of Kitanidis. When the number of piezometers is small, the record of the transient self-potential signals provides useful information to characterize the hydraulic conductivity and specific storage fields. These results show that the self-potential method reveals heterogeneities in some areas of the aquifer which could not be captured by the tomography based on the hydraulic heads alone. In our analysis, the improvement in the hydraulic conductivity and specific storage estimates was based on perfect knowledge of the electrical resistivity field. This implies that electrical resistivity will need to be jointly inverted with the hydraulic parameters in future studies and the impact of its uncertainty assessed with respect to the final tomograms of the hydraulic parameters.

  6. A stochastic framework for spot-scanning particle therapy.

    PubMed

    Robini, Marc; Yuemin Zhu; Wanyu Liu; Magnin, Isabelle

    2016-08-01

    In spot-scanning particle therapy, inverse treatment planning is usually limited to finding the optimal beam fluences given the beam trajectories and energies. We address the much more challenging problem of jointly optimizing the beam fluences, trajectories and energies. For this purpose, we design a simulated annealing algorithm with an exploration mechanism that balances the conflicting demands of a small mixing time at high temperatures and a reasonable acceptance rate at low temperatures. Numerical experiments substantiate the relevance of our approach and open new horizons to spot-scanning particle therapy.

  7. A combined joint diagonalization-MUSIC algorithm for subsurface targets localization

    NASA Astrophysics Data System (ADS)

    Wang, Yinlin; Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon

    2014-06-01

    This paper presents a combined joint diagonalization (JD) and multiple signal classification (MUSIC) algorithm for estimating subsurface object locations from electromagnetic induction (EMI) sensor data, without solving ill-posed inverse-scattering problems. JD is a numerical technique that finds the common eigenvectors that diagonalize a set of multistatic response (MSR) matrices measured by a time-domain EMI sensor. Eigenvalues from targets of interest (TOI) can then be distinguished automatically from noise-related eigenvalues. Filtering is also carried out in JD to improve the signal-to-noise ratio (SNR) of the data. The MUSIC algorithm utilizes the orthogonality between the signal and noise subspaces in the MSR matrix, which can be separated with information provided by JD. An array of theoretically calculated Green's functions is then projected onto the noise subspace, and the location of the target is estimated by the minimum of the projection owing to the orthogonality. This combined method is applied to data from the Time-Domain Electromagnetic Multisensor Towed Array Detection System (TEMTADS). Examples of TEMTADS test stand data and field data collected at Spencer Range, Tennessee are analyzed and presented. Results indicate that due to its noniterative mechanism, the method can be executed fast enough to provide real-time estimation of objects' locations in the field.
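
    The MUSIC step itself reduces to projecting candidate Green's-function vectors onto the noise subspace of the measured matrix and looking for where that projection vanishes. The sketch below uses a made-up 1/r^3 "Green's function" and receiver layout instead of the TEMTADS sensor model, and a plain SVD instead of JD, purely to show the subspace-projection mechanics.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical receiver layout and simple 1/r^3 response (not TEMTADS).
    receivers = np.array([[x, y, 0.0] for x in (-1, 0, 1) for y in (-1, 0, 1)])

    def green(src):
        r = np.linalg.norm(receivers - src, axis=1)
        return 1.0 / r ** 3

    true_src = np.array([0.3, -0.2, -0.6])
    # rank-1 "signal" over several snapshots plus noise, standing in for MSR data
    snapshots = np.outer(green(true_src), rng.normal(1, 0.1, 20))
    M = snapshots + 1e-3 * rng.normal(size=snapshots.shape)

    U, s, _ = np.linalg.svd(M)
    noise_sub = U[:, 1:]                 # assume a single target (rank-1 signal)

    # MUSIC pseudo-spectrum over candidate depths below (0.3, -0.2)
    depths = np.linspace(-1.5, -0.1, 60)
    spec = []
    for z in depths:
        g = green(np.array([0.3, -0.2, z]))
        g = g / np.linalg.norm(g)
        spec.append(1.0 / np.linalg.norm(noise_sub.T @ g) ** 2)
    print(depths[int(np.argmax(spec))])  # peaks near the true depth of -0.6
    ```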

  8. Joint inversion for transponder localization and sound-speed profile temporal variation in high-precision acoustic surveys.

    PubMed

    Li, Zhao; Dosso, Stan E; Sun, Dajun

    2016-07-01

    This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.

  9. Cyclic coordinate descent: A robotics algorithm for protein loop closure.

    PubMed

    Canutescu, Adrian A; Dunbrack, Roland L

    2003-05-01

    In protein structure prediction, it is often the case that a protein segment must be adjusted to connect two fixed segments. This occurs during loop structure prediction in homology modeling as well as in ab initio structure prediction. Several algorithms for this purpose are based on the inverse Jacobian of the distance constraints with respect to dihedral angle degrees of freedom. These algorithms are sometimes unstable and fail to converge. We present an algorithm developed originally for inverse kinematics applications in robotics. In robotics, an end effector in the form of a robot hand must reach for an object in space by altering adjustable joint angles and arm lengths. In loop prediction, dihedral angles must be adjusted to move the C-terminal residue of a segment to superimpose on a fixed anchor residue in the protein structure. The algorithm, referred to as cyclic coordinate descent or CCD, involves adjusting one dihedral angle at a time to minimize the sum of the squared distances between three backbone atoms of the moving C-terminal anchor and the corresponding atoms in the fixed C-terminal anchor. The result is an equation in one variable for the proposed change in each dihedral. The algorithm proceeds iteratively through all of the adjustable dihedral angles from the N-terminal to the C-terminal end of the loop. CCD is suitable as a component of loop prediction methods that generate large numbers of trial structures. It succeeds in closing loops in a large test set 99.79% of the time, and fails occasionally only for short, highly extended loops. It is very fast, closing loops of length 8 in 0.037 sec on average.
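
    A minimal planar version of cyclic coordinate descent captures the essence of the algorithm: rotate one joint at a time so that the chain's end point swings toward a fixed target. This single end-point version is a simplification of the paper's three-anchor-atom superposition; the chain length and target below are arbitrary.

    ```python
    import numpy as np

    # Planar kinematic chain with unit-length links; CCD rotates one joint at
    # a time so the end point moves toward a fixed target.
    n_links = 6
    angles = np.zeros(n_links)
    target = np.array([2.0, 3.0])

    def positions(angles):
        pts, heading = [np.zeros(2)], 0.0
        for a in angles:
            heading += a
            pts.append(pts[-1] + np.array([np.cos(heading), np.sin(heading)]))
        return np.array(pts)

    for _ in range(50):                          # CCD sweeps
        for j in reversed(range(n_links)):       # from the end joint backwards
            pts = positions(angles)
            pivot, end = pts[j], pts[-1]
            v1, v2 = end - pivot, target - pivot
            # rotate joint j by the angle that best aligns v1 with v2
            dtheta = np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
            angles[j] += dtheta
        if np.linalg.norm(positions(angles)[-1] - target) < 1e-6:
            break

    print(positions(angles)[-1])                 # should be close to the target
    ```

    Because each update is a closed-form one-variable rotation, the method needs no Jacobian and cannot diverge, which is why it suits pipelines that generate large numbers of trial loop conformations.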

  10. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows localizing the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.
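
    The structural cross-gradient that the SCG constraints drive toward zero can be evaluated directly on two gridded models. The numpy sketch below is a generic 2D implementation with synthetic models sharing one anomaly outline; it illustrates the quantity being constrained, not the authors' inversion code.

    ```python
    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        """Cross-gradient t = dm1/dz * dm2/dx - dm1/dx * dm2/dz on a 2D grid.

        Zero wherever the two models' gradients are parallel, i.e. where the
        models are structurally similar.
        """
        dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
        dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
        return dm1_dz * dm2_dx - dm1_dx * dm2_dz

    # Two synthetic models sharing the same circular anomaly outline
    z, x = np.mgrid[0:40, 0:60]
    blob = ((x - 30) ** 2 + (z - 20) ** 2) < 100
    resistivity = np.where(blob, 10.0, 100.0)
    velocity = np.where(blob, 1500.0, 2500.0)

    t = cross_gradient(np.log(resistivity), velocity)
    print(np.abs(t).max())   # essentially zero: the two models share structure
    ```

    In the AJC scheme described above, the weight given to this constraint is not fixed but rescaled at each time step from the ratio of the two methods' model sensitivities, so that neither data set dominates where it has little resolving power.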

  11. Joint inversion of lake-floor electrical resistivity tomography and boat-towed radio-magnetotelluric data illustrated on synthetic data and an application from the Äspö Hard Rock Laboratory site, Sweden

    NASA Astrophysics Data System (ADS)

    Wang, Shunguo; Kalscheuer, Thomas; Bastani, Mehrdad; Malehmir, Alireza; Pedersen, Laust B.; Dahlin, Torleif; Meqbel, Naser

    2018-04-01

    The electrical resistivity tomography (ERT) method provides moderately good constraints for both conductive and resistive structures, while the radio-magnetotelluric (RMT) method is well suited to constrain conductive structures. Additionally, RMT and ERT data may have different target coverage and are differently affected by various types of noise. Hence, joint inversion of RMT and ERT data sets may provide a better constrained model as compared to individual inversions. In this study, joint inversion of boat-towed RMT and lake-floor ERT data has for the first time been formulated and implemented. The implementation was tested on both synthetic and field data sets incorporating RMT transverse electrical mode and ERT data. Results from synthetic data demonstrate that the joint inversion yields models with better resolution compared with individual inversions. A case study from an area adjacent to the Äspö Hard Rock Laboratory (HRL) in southeastern Sweden was used to demonstrate the implementation of the method. A 790-m-long profile comprising lake-floor ERT and boat-towed RMT data combined with partial land data was used for this purpose. Joint inversions with and without weighting (applied to different data sets, vertical and horizontal model smoothness) as well as constrained joint inversions incorporating bathymetry data and water resistivity measurements were performed. The resulting models delineate subsurface structures such as a major northeasterly directed fracture system, which is observed in the HRL facility underground and confirmed by boreholes. A previously uncertain weakness zone, likely a fracture system in the northern part of the profile, is inferred in this study. The fractures are highly saturated with saline water, which make them good targets of resistivity-based geophysical methods. Nevertheless, conductive sediments overlain by the lake water add further difficulty to resolve these deep fracture zones. Therefore, the joint inversion of RMT and ERT data particularly helps to improve the resolution of the resistivity models in areas where the profile traverses shallow water and land sections. Our modification of the joint inversion of RMT and ERT data improves the study of geological units underneath shallow water bodies where underground infrastructures are planned. Thus, it allows better planning and mitigating the risks and costs associated with conductive weakness zones.

  12. Joint Loads and Cartilage Stress in Intact Joints of Military Transtibial Amputees: Enhancing Quality of Life

    DTIC Science & Technology

    2017-04-01

    crosstalk); analysis of tested subjects underway. 4) Developed analytical methods to obtain knee joint loads using EMG-driven inverse dynamics; analysis of...13/2018. Completion %: 40. Task 1.3: EMG-driven inverse dynamic (ID) analyses with OpenSim for amputee and control group subjects. Target date: 1...predicted by EMG-driven inverse dynamics. Two-three conference papers are being prepared for submission in February 2017. Other achievements. None

  13. Kinematic equations for control of the redundant eight-degree-of-freedom advanced research manipulator 2

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    The forward position and velocity kinematics for the redundant eight-degree-of-freedom Advanced Research Manipulator 2 (ARM2) are presented. Inverse position and velocity kinematic solutions are also presented. The approach in this paper is to specify two of the unknowns and solve for the remaining six unknowns. Two unknowns can be specified with two restrictions. First, the elbow joint angle and rate cannot be specified because they are known from the end-effector position and velocity. Second, one unknown must be specified from the four-jointed wrist, and the second from joints that translate the wrist, elbow joint excluded. There are eight solutions to the inverse position problem. The inverse velocity solution is unique, assuming the Jacobian matrix is not singular. A discussion of singularities is based on specifying two joint rates and analyzing the reduced Jacobian matrix. When this matrix is singular, the generalized inverse may be used as an alternate solution. Computer simulations were developed to verify the equations. Examples demonstrate agreement between forward and inverse solutions.

  14. Joint Inversion of 1-Hz GPS Data and Strong Motion Records for the Rupture Process of the 2008 Iwate-Miyagi Nairiku Earthquake: Objectively Determining Relative Weighting

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Kato, T.; Wang, Y.

    2015-12-01

    The spatiotemporal fault slip history of the 2008 Iwate-Miyagi Nairiku earthquake, Japan, is obtained by the joint inversion of 1-Hz GPS waveforms and near-field strong motion records. 1-Hz GPS data from GEONET are processed with GAMIT/GLOBK and then a low-pass filter of 0.05 Hz is applied. The ground surface strong motion records from K-NET and KiK-net stations are band-pass filtered in the range of 0.05 ~ 0.3 Hz and integrated once to obtain velocity. The joint inversion exploits a broader frequency band for near-field ground motions, which provides excellent constraints on both the detailed slip history and the slip distribution. A fully Bayesian inversion method is performed to simultaneously and objectively determine the rupture model, the unknown relative weighting of multiple data sets and the unknown smoothing hyperparameters. The preferred rupture model is stable for different choices of velocity structure model and station distribution, with maximum slip of ~ 8.0 m and seismic moment of 2.9 × 10^19 Nm (Mw 6.9). Compared with the single inversion of strong motion records, the cumulative slip distribution from the joint inversion is sparser, with two slip asperities. One common slip asperity extends from the hypocenter southeastward to the surface breakage; another slip asperity, unique to the joint inversion and contributed by the 1-Hz GPS waveforms, appears in the deep part of the fault where very few aftershocks occurred. The differential moment rate function between the joint and single inversions indicates that abundant high-frequency waves, but few low-frequency waves, are radiated in the first three seconds.

  15. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
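
    The joint entropy prior referred to above is computed from the joint intensity histogram of the functional and anatomical images. The toy numpy sketch below evaluates that measure on synthetic image pairs (without the wavelet subband decomposition that the WJE variant adds); the images and bin count are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy "MR" and "PET" images sharing structure: joint entropy is low when
    # the two intensity distributions are tightly coupled.
    mr = rng.normal(size=(64, 64))
    pet = 2.0 * mr + 0.1 * rng.normal(size=(64, 64))   # correlated images

    def joint_entropy(a, b, bins=32):
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    print(joint_entropy(pet, mr))                        # structured pair: lower
    print(joint_entropy(rng.normal(size=(64, 64)), mr))  # independent pair: higher
    ```

    In the MAP reconstruction, the derivative of this measure with respect to each voxel intensity acts as the prior term; the paper's contribution is to evaluate it on wavelet subbands so that spatial structure, not just intensity, enters the similarity measure.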

  16. Research on joint parameter inversion for an integrated underground displacement 3D measuring sensor.

    PubMed

    Shentu, Nanying; Qiu, Guohua; Li, Qing; Tong, Renyuan; Shentu, Nankai; Wang, Yanjie

    2015-04-13

    Underground displacement monitoring is a key means of monitoring and evaluating geological disasters and geotechnical projects. Few practical instruments are able to monitor subsurface horizontal and vertical displacements simultaneously, owing to the invisibility and complexity of the monitoring problem. A novel underground displacement 3D measuring sensor was proposed in our previous studies, and considerable effort has been devoted to basic theoretical research on its underground displacement sensing and measuring characteristics by means of modeling, simulation and experiments. This paper presents an innovative underground displacement joint inversion method that combines a specific forward modeling approach with an approximate optimization inversion procedure. It realizes a joint inversion of underground horizontal and vertical displacement for the proposed 3D sensor. Comparative studies have been conducted between the measured and inverted values of underground horizontal and vertical displacements under a variety of experimental and inversion conditions. The results showed that when the experimentally measured horizontal and vertical displacements both vary within 0~30 mm, the horizontal and vertical displacement inversion discrepancies are generally less than 3 mm and 1 mm, respectively, under three kinds of simulated underground displacement monitoring circumstances. This implies that the proposed underground displacement joint inversion method is robust and efficient in predicting the measured values of underground horizontal and vertical displacements for the proposed sensor.

  17. Getting in shape: Reconstructing three-dimensional long-track speed skating kinematics by comparing several body pose reconstruction techniques.

    PubMed

    van der Kruk, E; Schwab, A L; van der Helm, F C T; Veeger, H E J

    2018-03-01

    In gait studies, body pose reconstruction (BPR) techniques have been widely explored, but no previous protocols have been developed for speed skating, and the peculiarities of the skating posture and technique do not automatically allow the results of those explorations to be transferred to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics in speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight-body-segment model together with a global optimization method, with revolute joints at the knee and the lumbosacral joint while keeping the other joints spherical, is the most realistic model to use for the inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-squares error method for the inverse dynamics. Reporting the BPR technique and the inverse dynamics method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR, and an underestimation of up to 31% in mean joint power when a bottom-up inverse dynamics method was chosen instead of a least-squares error approach. Although these results are aimed at speed skating, reporting the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Objective function analysis for electric soundings (VES), transient electromagnetic soundings (TEM) and joint inversion VES/TEM

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Bokhonok, Oleg; Porsani, Jorge Luís; Monteiro dos Santos, Fernando Acácio; Diogo, Liliana Alcazar; Slob, Evert

    2017-11-01

    Ambiguities in geophysical inversion results are always present. How these ambiguities appear is, in most cases, open to interpretation. It is therefore interesting to investigate ambiguities with respect to the parameters of the models under study. The Residual Function Dispersion Map (RFDM) can be used to differentiate between global ambiguities and local minima in the objective function. We apply RFDM to Vertical Electrical Sounding (VES) and TEM sounding inversion results. Through topographic analysis of the objective function we evaluate the advantages and limitations of electrical sounding data compared with TEM sounding data, and the benefits of joint inversion in comparison with the individual methods. The RFDM analysis proved to be a very useful tool for understanding the VES/TEM joint inversion method. The applicability of RFDM analysis to real data is also explored, demonstrating both how the objective function behaves for field data and how the approach performs in real cases. With the analysis of the results, it is possible to understand how the joint inversion can reduce the ambiguity of the individual methods.
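    A minimal sketch of the idea behind an objective-function (residual dispersion) map, using toy stand-in forward responses rather than the paper's VES/TEM kernels: the misfit of each method, and of their sum (the joint objective), is evaluated over a grid of two model parameters so that ambiguity and local minima can be visualized.

        import numpy as np

        def forward_a(rho1, rho2, x):        # stand-in for a VES-like forward response
            return rho1 * np.exp(-x) + rho2 * (1 - np.exp(-x))

        def forward_b(rho1, rho2, t):        # stand-in for a TEM-like forward response
            return (rho1 * rho2) ** 0.5 * t ** -1.5

        x = np.linspace(0.1, 3.0, 30)
        t = np.linspace(0.1, 3.0, 30)
        true = (50.0, 200.0)                 # "true" resistivities of a toy 2-layer model
        d_a, d_b = forward_a(*true, x), forward_b(*true, t)

        r1 = np.linspace(10, 300, 150)
        r2 = np.linspace(10, 300, 150)
        obj_a = np.zeros((r1.size, r2.size))
        obj_b = np.zeros_like(obj_a)
        for i, a in enumerate(r1):
            for j, b in enumerate(r2):
                obj_a[i, j] = np.sum((forward_a(a, b, x) - d_a) ** 2)
                obj_b[i, j] = np.sum((forward_b(a, b, t) - d_b) ** 2)
        obj_joint = obj_a + obj_b            # joint objective surface to map and inspect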

  19. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, Longxiao; Gu, Hanming

    2018-03-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they are valid only when the contrast in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor series expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data from the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not rely on such restrictive assumptions, can estimate more parameters simultaneously, and has broader applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results demonstrate the validity and noise robustness of our method. We also apply the inversion to actual field data and demonstrate the feasibility of our method in practical situations.
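    A minimal sketch of a generalized linear (Gauss-Newton-type) iteration with a numerically linearized forward operator; the placeholder forward function stands in for the exact-Zoeppritz PP reflection response of the stacked baseline and monitor data, and the damping value is an assumption.

        import numpy as np

        def forward(m):
            # placeholder nonlinear forward model mapping (vp, vs, rho) to data
            return np.array([m[0] * m[1], np.sin(m[0]) + m[2], m[1] ** 2 - m[2]])

        def jacobian(m, h=1e-6):
            # central finite-difference Jacobian of the forward operator
            J = np.zeros((forward(m).size, m.size))
            for k in range(m.size):
                dm = np.zeros_like(m); dm[k] = h
                J[:, k] = (forward(m + dm) - forward(m - dm)) / (2 * h)
            return J

        def generalized_linear_inversion(d_obs, m0, n_iter=20, eps=1e-3):
            # iterate m <- m + (J^T J + eps I)^-1 J^T (d_obs - f(m))
            m = m0.copy()
            for _ in range(n_iter):
                r = d_obs - forward(m)
                J = jacobian(m)
                m = m + np.linalg.solve(J.T @ J + eps * np.eye(m.size), J.T @ r)
            return m

        m_true = np.array([2.0, 1.0, 0.5])
        m_est = generalized_linear_inversion(forward(m_true), np.array([1.5, 0.8, 0.3]))
        print(m_est)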

  20. Joint time/frequency-domain inversion of reflection data for seabed geoacoustic profiles and uncertainties.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2008-03-01

    This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.

  1. Subtalar joint stress imaging with tomosynthesis.

    PubMed

    Teramoto, Atsushi; Watanabe, Kota; Takashima, Hiroyuki; Yamashita, Toshihiko

    2014-06-01

    The purpose of this study was to perform stress imaging of hindfoot inversion and eversion using tomosynthesis and to assess the subtalar joint range of motion (ROM) of healthy subjects. The subjects were 15 healthy volunteers with a mean age of 29.1 years. Coronal tomosynthesis stress imaging of the subtalar joint was performed in a total of 30 left and right ankles. A Telos stress device was used for the stress load, and the load was 150 N for both inversion and eversion. Tomographic images in which the posterior talocalcaneal joint could be confirmed on the neutral position images were used in measurements. The angle of the intersection formed by a line through the lateral articular facet of the posterior talocalcaneal joint and a line through the surface of the trochlea of the talus was measured. The mean change in the angle of the calcaneus with respect to the talus was 10.3 ± 4.8° with inversion stress and 5.0 ± 3.8° with eversion stress from the neutral position. The result was a clearer depiction of the subtalar joint, and inversion and eversion ROM of the subtalar joint was shown to be about 15° in healthy subjects. Diagnostic, Level IV.

  2. Improving the accurate assessment of a layered shear-wave velocity model using joint inversion of the effective Rayleigh wave and Love wave dispersion curves

    NASA Astrophysics Data System (ADS)

    Yin, X.; Xia, J.; Xu, H.

    2016-12-01

    Rayleigh and Love waves are two types of surface waves that travel along a free surface. Under the assumption of horizontally layered homogeneous media, Rayleigh-wave phase velocity can be defined as a function of frequency and four groups of earth parameters: P-wave velocity, SV-wave velocity, density and thickness of each layer. Unlike Rayleigh waves, Love-wave phase velocities of a layered homogeneous earth model can be calculated from frequency and three groups of earth properties: SH-wave velocity, density, and thickness of each layer. Because the dispersion of Love waves is independent of P-wave velocities, Love-wave dispersion curves are much simpler than those of Rayleigh waves. Research on joint inversion methods for Rayleigh and Love dispersion curves is therefore necessary. This dissertation combines theoretical analysis with practical applications. For both laterally homogeneous media and radially anisotropic media, joint inversion approaches for Rayleigh and Love waves are proposed to improve the accuracy of S-wave velocities. A 10% random white noise and a 20% random white noise are added to the synthetic dispersion curves to check the anti-noise ability of the proposed joint inversion method. Considering the influence of anomalous layers, Rayleigh and Love waves are insensitive to layers beneath a high-velocity or low-velocity layer and to the high-velocity layer itself. Low sensitivities give rise to a high degree of uncertainty in the inverted S-wave velocities of these layers. Because the sensitivity peaks of Rayleigh and Love waves occur in different frequency ranges, the theoretical analyses demonstrate that joint inversion of these two types of waves is likely to improve the inverted model. The lack of surface-wave (Rayleigh or Love) dispersion data may lead to inaccurate S-wave velocities in the single inversion of Rayleigh or Love waves, so this dissertation presents a joint inversion method for Rayleigh and Love waves that improves the accuracy of S-wave velocities. Finally, a real-world example is used to verify the accuracy and stability of the proposed joint inversion method. Keywords: Rayleigh wave; Love wave; Sensitivity analysis; Joint inversion method.
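    A minimal sketch of the joint objective such an inversion would minimize, combining Rayleigh- and Love-wave dispersion misfits with noise normalization and a relative weight; the dispersion functions below are simple placeholders, not modal dispersion solvers.

        import numpy as np

        def rayleigh_dispersion(vs, freqs):   # placeholder; a real kernel also needs Vp, rho
            return 0.92 * vs.mean() * (1.0 - 0.2 * np.exp(-freqs))

        def love_dispersion(vs, freqs):       # placeholder; a real kernel needs Vsh, rho only
            return 1.05 * vs.mean() * (1.0 - 0.15 * np.exp(-freqs))

        def joint_misfit(vs, freqs, obs_r, obs_l, sig_r, sig_l, w=0.5):
            # weighted sum of noise-normalized Rayleigh and Love residuals
            res_r = (rayleigh_dispersion(vs, freqs) - obs_r) / sig_r
            res_l = (love_dispersion(vs, freqs) - obs_l) / sig_l
            return w * np.sum(res_r ** 2) + (1.0 - w) * np.sum(res_l ** 2)

        freqs = np.linspace(5, 50, 20)
        vs_true = np.array([200.0, 350.0, 600.0])
        obs_r = rayleigh_dispersion(vs_true, freqs) + np.random.normal(0, 2, freqs.size)
        obs_l = love_dispersion(vs_true, freqs) + np.random.normal(0, 2, freqs.size)
        print(joint_misfit(vs_true, freqs, obs_r, obs_l, 2.0, 2.0))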

  3. Test-retest reliability of sudden ankle inversion measurements in subjects with healthy ankle joints.

    PubMed

    Eechaute, Christophe; Vaes, Peter; Duquet, William; Van Gheluwe, Bart

    2007-01-01

    Sudden ankle inversion tests have been used to investigate whether the onset of peroneal muscle activity is delayed in patients with chronically unstable ankle joints. Before interpreting test results of latency times in patients with chronic ankle instability and healthy subjects, the reliability of these measures must first be demonstrated. To investigate the test-retest reliability of variables measured during a sudden ankle inversion movement in standing subjects with healthy ankle joints. Validation study. Research laboratory. 15 subjects with healthy ankle joints (30 ankles). Subjects stood on an ankle inversion platform with both feet tightly fixed to independently moveable trapdoors. An unexpected sudden ankle inversion of 50 degrees was imposed. We measured latency and motor response times and electromechanical delay of the peroneus longus muscle, along with the time and angular position of the first and second decelerating moments, the mean and maximum inversion speed, and the total inversion time. Correlation coefficients and standard errors of measurement were calculated. Intraclass correlation coefficients ranged from 0.17 for the electromechanical delay of the peroneus longus muscle (standard error of measurement = 2.7 milliseconds) to 0.89 for the maximum inversion speed (standard error of measurement = 34.8 milliseconds). The reliability of the latency and motor response times of the peroneus longus muscle, the time of the first and second decelerating moments, and the mean and maximum inversion speed was acceptable in subjects with healthy ankle joints and supports the investigation of the reliability of these measures in subjects with chronic ankle instability. The lower reliability of the electromechanical delay of the peroneus longus muscle and the angular positions of both decelerating moments calls the use of these variables into question.

  4. A complete analytical solution for the inverse instantaneous kinematics of a spherical-revolute-spherical (7R) redundant manipulator

    NASA Technical Reports Server (NTRS)

    Podhorodeski, R. P.; Fenton, R. G.; Goldenberg, A. A.

    1989-01-01

    Using a method based on resolving joint velocities with reciprocal screw quantities, compact analytical expressions are generated for the inverse solution of the joint rates of a seven-revolute (spherical-revolute-spherical) manipulator. The method uses a sequential decomposition of screw coordinates to identify the reciprocal screw quantities used in the resolution of a particular joint rate solution, and also to identify a Jacobian null-space basis used for the direct solution of optimal joint rates. The results of the screw decomposition are used to study special configurations of the manipulator, generating expressions for the inverse velocity solution for all non-singular configurations and identifying singular configurations and their characteristics. Two purposes are therefore served: a new general method for the solution of the inverse velocity problem is presented, and complete analytical expressions are derived for the resolution of the joint rates of a seven-degree-of-freedom manipulator useful for telerobotic and industrial robotic applications.
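    For contrast with the analytical screw-based solution described above, a minimal numerical sketch of redundant-arm inverse velocity resolution using the Jacobian pseudoinverse with a null-space term (a generic technique, not the paper's method); the Jacobian below is a random placeholder for a real kinematic model.

        import numpy as np

        def resolved_rates(J, twist, qdot_secondary):
            # qdot = J^+ v + (I - J^+ J) qdot_0: redundancy used for a secondary objective
            J_pinv = np.linalg.pinv(J)
            null_proj = np.eye(J.shape[1]) - J_pinv @ J
            return J_pinv @ twist + null_proj @ qdot_secondary

        J = np.random.rand(6, 7)                              # placeholder 6x7 Jacobian
        twist = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.05])     # desired end-effector twist
        qdot0 = np.zeros(7)                                   # no secondary objective here
        print(resolved_rates(J, twist, qdot0))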

  5. Coupled Land Surface-Subsurface Hydrogeophysical Inverse Modeling to Estimate Soil Organic Carbon Content in an Arctic Tundra

    NASA Astrophysics Data System (ADS)

    Tran, A. P.; Dafflon, B.; Hubbard, S.

    2017-12-01

    Soil organic carbon (SOC) is crucial for predicting carbon-climate feedbacks in the vulnerable organic-rich Arctic region. However, it is challenging to estimate this property because of the general limitations of conventional core sampling and analysis methods. In this study, we develop an inversion scheme that uses single or multiple datasets, including soil liquid water content, temperature and ERT data, to estimate the vertical profile of SOC content. Our approach relies on the fact that SOC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. The scheme has several advantages. First, this is the first time SOC content has been estimated using a coupled hydrogeophysical inversion. Second, by using the Community Land Model, we can account for land surface dynamics (evapotranspiration, snow accumulation and melting) and the ice/liquid phase transition. Third, we combine a deterministic and an adaptive Markov chain Monte Carlo optimization algorithm to better estimate the posterior distributions of the desired model parameters. Finally, the simulated subsurface variables are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using synthetic experiments. The results show that, compared to inversion of a single dataset, joint inversion of these datasets significantly reduces parameter uncertainty. The joint inversion approach is able to estimate SOC content within the shallow active layer with high reliability. Next, we apply the scheme to estimate OC content along an intensive ERT transect in Barrow, Alaska, using multiple datasets acquired in the 2013-2015 period. The preliminary results show good agreement between modeled and measured soil temperature, thaw layer thickness and electrical resistivity. The accuracy of the estimated SOC content will be evaluated by comparison with measurements from soil samples along the transect. Our study presents a new surface-subsurface, deterministic-stochastic hydrogeophysical inversion approach, as well as the benefit of including multiple types of data to estimate SOC and associated hydrological-thermal dynamics.
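    A minimal sketch of the sampling idea, reduced to a fixed-step Metropolis sampler (the study uses a deterministic plus adaptive MCMC scheme) with placeholder forward models linking a single parameter to two datasets; the physics and noise levels are assumptions for illustration only.

        import numpy as np

        def fwd_temperature(oc):                 # placeholder hydrological-thermal response
            return 2.0 - 1.5 * oc

        def fwd_resistivity(oc):                 # placeholder petrophysical link
            return 100.0 * np.exp(2.0 * oc)

        def log_post(oc, d_T, d_R, sig_T=0.1, sig_R=5.0):
            # uniform prior on [0, 1]; Gaussian likelihoods for each dataset
            if not 0.0 <= oc <= 1.0:
                return -np.inf
            return (-0.5 * ((fwd_temperature(oc) - d_T) / sig_T) ** 2
                    - 0.5 * ((fwd_resistivity(oc) - d_R) / sig_R) ** 2)

        rng = np.random.default_rng(0)
        d_T, d_R = fwd_temperature(0.3), fwd_resistivity(0.3)   # synthetic "observed" data
        oc, samples = 0.5, []
        for _ in range(20000):
            prop = oc + rng.normal(0, 0.05)
            if np.log(rng.uniform()) < log_post(prop, d_T, d_R) - log_post(oc, d_T, d_R):
                oc = prop
            samples.append(oc)
        print(np.mean(samples[5000:]), np.std(samples[5000:]))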

  6. Simultaneous inversion of seismic velocity and moment tensor using elastic-waveform inversion of microseismic data: Application to the Aneth CO2-EOR field

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Huang, L.

    2017-12-01

    Moment tensors are key parameters for characterizing CO2-injection-induced microseismic events. Elastic-waveform inversion has the potential to provide accurate moment-tensor estimates. Microseismic waveforms contain information on both the source moment tensors and the wave propagation velocity along the wavepaths. We develop an elastic-waveform inversion method to jointly invert for the seismic velocity model and the moment tensors. We first use our adaptive moment-tensor joint inversion method to estimate the moment tensors of microseismic events. This adaptive method jointly inverts multiple microseismic events with similar waveforms within a cluster to reduce inversion uncertainty for microseismic data recorded with a single borehole geophone array. We use this result as the initial model for our elastic-waveform inversion, which minimizes the cross-correlation-based data misfit between observed and synthetic data. We verify our method using synthetic microseismic data and obtain improved results for both the moment tensors and the seismic velocity model. We apply the new inversion method to microseismic data acquired at a CO2-enhanced oil recovery field in Aneth, Utah, using a single borehole geophone array. The results demonstrate that our new inversion method significantly reduces the data misfit compared to the conventional ray-theory-based moment-tensor inversion.

  7. Assessment of ankle and hindfoot stability and joint pressures using a human cadaveric model of a large lateral talar process excision: a biomechanical study.

    PubMed

    Sands, Andrew; White, Charles; Blankstein, Michael; Zderic, Ivan; Wahl, Dieter; Ernst, Manuela; Windolf, Markus; Hagen, Jennifer E; Richards, R Geoff; Stoffel, Karl; Gueorguiev, Boyko

    2015-03-01

    Lateral talar process fragment excision may be followed by hindfoot instability and altered biomechanics. There is controversy regarding the ideal fragment size for internal fixation versus excision, and a concern that excision of a large fragment may lead to significant instability. The aim of this study was to assess the effect of a simulated large lateral talar process excision on ankle and subtalar joint stability. A custom-made seesaw rig was designed to apply inversion/eversion stress loading on 7 fresh-frozen human cadaveric lower legs and investigate them in the pre-excision, 5 cm and 10 cm lateral talar process fragment excision states. Anteroposterior radiographs were taken to assess ankle and subtalar joint tilt and to calculate the angular change from neutral hindfoot alignment to 10-kg forced inversion/eversion. Ankle joint pressures and contact areas were measured under a 30-kg axial load in neutral hindfoot alignment. In comparison with the pre-excision state, no significantly different mediolateral angular change was observed in the subtalar joint after 5 and 10 cm lateral talar process fragment excision in inversion and eversion. With respect to the ankle joint, 10-cm fragment excision produced a significantly larger inversion tibiotalar tilt compared with the pre-excision state, P = .04. No significant change in ankle joint pressure or contact area was detected after 5 and 10-cm excision in comparison with the pre-excision state. An excision of up to 10 cm of the lateral talar process does not cause significant instability at the level of the subtalar joint but might be a destabilizing factor at the ankle joint under inversion stress. The latter could be related to the extensive soft tissue dissection required for resection.

  8. Inversion Of Jacobian Matrix For Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Report discusses inversion of the Jacobian matrix for a class of six-degree-of-freedom arms with a spherical wrist, i.e., with the last three joint axes intersecting. Shows that by taking advantage of the simple geometry of such arms, the closed-form solution of Q = J^-1 X, which represents the linear transformation from task space to joint space, can be obtained efficiently. Presents solutions for the PUMA arm, the JPL/Stanford arm, and a six-revolute-joint coplanar arm, along with all singular points. The main contribution of the paper is to show that the simple geometry of this type of arm can be exploited to perform the inverse transformation without any need to compute the Jacobian or its inverse explicitly. The implication of this computational efficiency is that advanced task-space control schemes for spherical-wrist arms can be implemented more efficiently.

  9. NON-Shor Factorization Via BEQS BEC: Watkins Number-Theory ``Pure''-Mathematics U With Statistical-Physics; Benford Log-Law Inversion to ONLY BEQS digit d=0 BEC!!!

    NASA Astrophysics Data System (ADS)

    Lyons, M.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Weiss-Page-Holthaus [Physica A, 341, 586 (2004); http://arxiv.org/abs/cond-mat/0403295] number factorization via a BEQS BEC, versus the Shor algorithm, strongly supports Watkins' [www.secamlocal.ex.ac.uk/people/staff/mrwatkin/] intersection of number-theory "pure" mathematics with statistical physics, as in Siegel's [AMS Joint Mtg. (2002), Abs. 973-60-124] algebraic inversion of the Benford logarithmic law to only a BEQS with digit d = 0, i.e. a BEC. A claimed Siegel proof of the Riemann hypothesis via Rayleigh [Phil. Trans. CLXI (1870)], Polya [Math. Ann. (1921)], random walks and electric networks [MAA (1981)], and Anderson [PRL (1958)] localization [Siegel, Symp. Fractals, MRS Fall Mtg. (1989)] is also invoked, together with a "FUZZYICS = CATEGORYICS" locality-to-globality crossover relating the fluctuation-dissipation theorem and generalized-susceptibility power spectra to Zipf-law hyperbolic inevitability, intersecting with the BEQS BEC (d = 0) result.

  10. Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis

    NASA Astrophysics Data System (ADS)

    Lozovanu, D.; Pickl, S. W.; Weber, G.-W.

    2004-08-01

    This paper is concerned with the optimization of a nonlinear time-discrete model that exploits the special structure of the underlying cost game and the properties of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied to the dual problem. An introduction to the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotonicity for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.

  11. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    NASA Astrophysics Data System (ADS)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences with SURF. Unlike traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme not only finds consistent feature points that link the image sequences but also reduces erroneous matching pairs. After that, the target centroid is detected by regular moments. Finally, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
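    A minimal sketch of feature extraction, matching and moment-based centroid detection with OpenCV; ORB is used here as a freely available stand-in for SURF (which requires the non-free opencv-contrib build), and the file names are illustrative assumptions.

        import cv2

        img1 = cv2.imread("isar_frame_000.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("isar_frame_001.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=500)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # cross-checked matching plays a role loosely analogous to the paper's
        # bilateral feature registering model
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        # target centroid from zeroth- and first-order regular (raw) image moments
        m = cv2.moments(img1)
        centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
        print(len(matches), centroid)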

  12. Research on Joint Parameter Inversion for an Integrated Underground Displacement 3D Measuring Sensor

    PubMed Central

    Shentu, Nanying; Qiu, Guohua; Li, Qing; Tong, Renyuan; Shentu, Nankai; Wang, Yanjie

    2015-01-01

    Underground displacement monitoring is a key means of monitoring and evaluating geological disasters and geotechnical projects. Few practical instruments are able to monitor subsurface horizontal and vertical displacements simultaneously, owing to the invisibility and complexity of the monitoring problem. A novel underground displacement 3D measuring sensor was proposed in our previous studies, and considerable effort has been devoted to basic theoretical research on its underground displacement sensing and measuring characteristics by means of modeling, simulation and experiments. This paper presents an innovative underground displacement joint inversion method that combines a specific forward modeling approach with an approximate optimization inversion procedure. It realizes a joint inversion of underground horizontal and vertical displacement for the proposed 3D sensor. Comparative studies have been conducted between the measured and inverted values of underground horizontal and vertical displacements under a variety of experimental and inversion conditions. The results showed that when the experimentally measured horizontal and vertical displacements both vary within 0 ~ 30 mm, the horizontal and vertical displacement inversion discrepancies are generally less than 3 mm and 1 mm, respectively, under three kinds of simulated underground displacement monitoring circumstances. This implies that the proposed underground displacement joint inversion method is robust and efficient in predicting the measured values of underground horizontal and vertical displacements for the proposed sensor. PMID:25871714

  13. Joint two dimensional inversion of gravity and magnetotelluric data using correspondence maps

    NASA Astrophysics Data System (ADS)

    Carrillo Lopez, J.; Gallardo, L. A.

    2016-12-01

    Inverse problems in Earth sciences are inherently non-unique. To improve models and reduce the number of solutions, we need to provide extra information. In a geological context, this can be a priori information, for example geological knowledge, well-log data, or smoothness constraints, or it can be information from measurements of different kinds of data. Joint inversion provides an approach to improve the solution and reduce the errors caused by the assumptions of each method. To do that, we need a link between two or more models. Some approaches have been explored successfully in recent years. For example, Gallardo and Meju (2003, 2004, 2011) and Gallardo et al. (2012) used the gradient directions of the properties to measure the similarity between models, minimizing their cross gradients. In this work, we propose a joint iterative inversion method that uses the spatial distribution of properties as the link. Correspondence maps may better characterize specific Earth systems because they consider the relation between properties. We implemented a Fortran code to perform two-dimensional inversion of magnetotelluric and gravity data, two of the standard methods in geophysical exploration. Synthetic tests show the advantages of joint inversion using correspondence maps over separate inversion. Finally, we applied this technique to magnetotelluric and gravity data from the Cerro Prieto geothermal zone, México.

  14. Determination of consistent patterns of range of motion in the ankle joint with a computed tomography stress-test.

    PubMed

    Tuijthof, Gabriëlle Josephine Maria; Zengerink, Maartje; Beimers, Lijkele; Jonges, Remmet; Maas, Mario; van Dijk, Cornelis Niek; Blankevoort, Leendert

    2009-07-01

    Measuring the range of motion of the ankle joint can assist in the accurate diagnosis of ankle laxity. A computed tomography-based stress-test (3D CT stress-test) was used that determines the three-dimensional position and orientation of the tibial, calcaneal and talar bones. The goal was to establish a quantitative database of the normal ranges of motion of the talocrural and subtalar joints. A clinical case of suspected subtalar instability demonstrated the relevance of the proposed method. The range of motion of the ankle joints was measured in vivo for 20 subjects using the 3D CT stress-test. Motion of the tibia and calcaneus relative to the talus for eight extreme foot positions was described by helical parameters. High consistency for finite helical axis orientation (n) and rotation (theta) was shown for: talocrural extreme dorsiflexion to extreme plantarflexion (root mean square direction deviation (eta) 5.3 degrees and theta: SD 11.0 degrees), talocrural and subtalar extreme combined eversion-dorsiflexion to combined inversion-plantarflexion (eta: 6.7 degrees, theta: SD 9.0 degrees and eta: 6.3 degrees, theta: SD 5.1 degrees), and subtalar extreme inversion to extreme eversion (eta: 6.4 degrees, theta: SD 5.9 degrees). Nearly all dorsi- and plantarflexion occurs in the talocrural joint (theta: mean 63.3 degrees (SD 11 degrees)). The inversion and internal rotation components for extreme eversion to inversion were approximately three times larger for the subtalar joint (theta: mean 22.9 degrees and 29.1 degrees) than for the talocrural joint (theta: mean 8.8 degrees and 10.7 degrees). Comparison of the ranges of motion of the pathologic ankle joint with those of the healthy subjects showed increased inversion and axial rotation in the talocrural joint instead of in the suspected subtalar joint. The proposed diagnostic technique and the acquired database of helical parameters of ankle joint ranges of motion are suitable for application in clinical cases.

  15. Joint probabilistic determination of earthquake location and velocity structure: application to local and regional events

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Haugmard, M.; Mocquet, A.

    2016-12-01

    The most widely used inversion schemes to locate earthquakes are based on iterative linearized least-squares algorithms and use a priori knowledge of the propagation medium. When only a small number of observations is available, for instance for moderate events, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Continuous Monte Carlo sampling with Markov chains generates models, within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure, among others) are computed and explicitly documented. This method reduces the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events in the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.

  16. Coupled land surface-subsurface hydrogeophysical inverse modeling to estimate soil organic carbon content and explore associated hydrological and thermal dynamics in the Arctic tundra

    NASA Astrophysics Data System (ADS)

    Phuong Tran, Anh; Dafflon, Baptiste; Hubbard, Susan S.

    2017-09-01

    Quantitative characterization of soil organic carbon (OC) content is essential because of its significant impacts on surface-subsurface hydrological-thermal processes and on the microbial decomposition of OC, both of which are in turn important for predicting carbon-climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological-thermal processes associated with annual freeze-thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets - including soil liquid water content, temperature and electrical resistivity tomography (ERT) data - to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface-subsurface hydrological dynamics from the bedrock to the top of the canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice-liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of the desired model parameters. For the hydrological-thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and the benefit of joint inversion on the estimation of OC and other parameters. We also quantify the propagation of uncertainty from the estimated parameters to the prediction of hydrological-thermal responses. We find that, compared to inversion of a single dataset (temperature, liquid water content or apparent resistivity), joint inversion of these datasets significantly reduces parameter uncertainty. We find that the joint inversion approach is able to estimate OC and sand content within the shallow active layer (top 0.3 m of soil) with high reliability. Due to the small variations of temperature and moisture within the shallow permafrost (here at about 0.6 m depth), the approach is unable to estimate OC with confidence at that depth. However, if the soil porosity is functionally related to the OC and mineral content, as is often observed in organic-rich Arctic soil, the uncertainty of the OC estimate at this depth decreases markedly. Our study documents the value of the new surface-subsurface, deterministic-stochastic inversion approach, as well as the benefit of including multiple types of data to estimate OC and associated hydrological-thermal dynamics.

  17. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information: by inverting the seismic data, useful information about the reservoir parameters can be obtained, allowing exploration to be carried out effectively. Pre-stack data are characterised by large volume and abundant information, and their inversion yields rich information about the reservoir parameters. Owing to the large volume of pre-stack seismic data, existing single-machine environments cannot meet the computational demands; thus, an efficient and fast method for solving the pre-stack seismic inversion problem is urgently needed. Optimising the elastic parameters with a standard genetic algorithm easily falls into a local optimum, which yields poor inversion results, especially for density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic-parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula and also improves the genetic operators; the improved algorithm obtains better inversion results in a model test with logging data. The inverted elastic parameters fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
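    A minimal sketch (with assumed parameter ranges) of a Gardner-based population initialization for such a genetic algorithm: starting densities are derived from the sampled P-wave velocities via the empirical Gardner relation, so that the initial population is petrophysically consistent.

        import numpy as np

        rng = np.random.default_rng(42)

        def init_population(n_ind, vp_range=(2000.0, 5000.0), vs_vp_ratio=(0.4, 0.6)):
            # sample Vp and Vs/Vp uniformly, then derive density from Gardner's
            # relation rho ~= 0.31 * Vp^0.25 (Vp in m/s, rho in g/cc) plus small scatter
            vp = rng.uniform(*vp_range, n_ind)
            vs = vp * rng.uniform(*vs_vp_ratio, n_ind)
            rho = 0.31 * vp ** 0.25 + rng.normal(0.0, 0.02, n_ind)
            return np.column_stack([vp, vs, rho])

        population = init_population(100)
        print(population[:3])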

  18. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each of the submitted papers has been reviewed by two reviewers. There have been 15 accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.

  19. Forward and inverse kinematics of double universal joint robot wrists

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1991-01-01

    A robot wrist consisting of two universal joints can eliminate the wrist singularity problem found on many individual robots. Forward and inverse position and velocity kinematics are presented for such a wrist having three degrees of freedom. Denavit-Hartenberg parameters are derived to find the transforms required for the kinematic equations. The Omni-Wrist, a commercial double universal joint robot wrist, is studied in detail. Four levels of kinematic parameters are identified for this wrist; three forward and three inverse maps are presented for both position and velocity. These equations relate the hand coordinate frame to the wrist base frame and are sufficient for control of the wrist standing alone. When the wrist is attached to a manipulator arm, the offset between the two universal joints complicates the solution of the overall kinematics problem. The wrist coordinate frame origins are not all coincident, which prevents decoupling of position and orientation in the manipulator inverse kinematics.

  20. Hybrid dual-Fourier tomographic algorithm for a fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion of data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of the radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  1. Sensitivity of Rayleigh wave ellipticity and implications for surface wave inversion

    NASA Astrophysics Data System (ADS)

    Cercato, Michele

    2018-04-01

    The use of Rayleigh wave ellipticity has gained increasing popularity in recent years for investigating earth structures, especially for near-surface soil characterization. In spite of its widespread application, the sensitivity of the ellipticity function to the soil structure has rarely been explored in a comprehensive and systematic manner. To this end, a new analytical method is presented for computing the sensitivity of Rayleigh wave ellipticity with respect to the structural parameters of a layered elastic half-space. This method takes advantage of the minor decomposition of the surface wave eigenproblem and is numerically stable at high frequency. The numerical procedure allows the sensitivity to be retrieved for typical near-surface and crustal geological scenarios, pointing out the key parameters for ellipticity interpretation under different circumstances. On this basis, a thorough analysis is performed to assess how ellipticity data can efficiently complement surface wave dispersion information in a joint inversion algorithm. Results from synthetic and real-world examples are presented to analyse quantitatively the diagnostic potential of ellipticity data with respect to the soil structure, focusing on possible sources of misinterpretation in data inversion.

  2. The Sensitivity of Joint Inversions of Seismic and Geodynamic Data to Mantle Viscosity

    NASA Astrophysics Data System (ADS)

    Lu, C.; Grand, S. P.; Forte, A. M.; Simmons, N. A.

    2017-12-01

    Seismic tomography has mapped the existence of large-scale mantle heterogeneities in recent years. However, the origin of these velocity anomalies in terms of chemical and thermal variations is still under debate because of the limitations of tomography. Joint inversion of seismic, geodynamic, and mineral physics observations has proven to be a powerful tool for decoupling thermal and chemical effects in the deep mantle (Simmons et al. 2010). The approach initially attempts to find a model in which temperature alone controls lateral variations in mantle properties, and then considers more complicated lateral variations that account for the presence of chemical heterogeneity to further fit the data. The geodynamic observations include Earth's free-air gravity field, tectonic plate motions, dynamic topography and the excess ellipticity of the core. The sensitivity of the geodynamic observables to density anomalies, however, depends on an assumed radial mantle viscosity profile. Here we perform joint inversions of seismic and geodynamic data using a number of published viscosity profiles. The goal is to test the sensitivity of the joint inversion results to mantle viscosity. For each viscosity model, geodynamic sensitivity kernels are calculated and used to jointly invert the geodynamic observations, as well as a new shear wave data set, for a model of density and seismic velocity. Compared with previous joint inversion studies, two major improvements have been made in our inversion. First, we use a nonlinear inversion to account for anelastic effects: applying the very fast simulated annealing (VFSA) method, we let the elastic scaling factor and anelastic parameters from mineral physics measurements vary within their possible ranges and find the best-fitting model assuming thermal variations are the cause of the heterogeneity. Second, we include an a priori subducting slab model in the starting model, so that the geodynamic and seismic signatures of short-wavelength subducting slabs are better accounted for in the inversions. Reference: Simmons, N. A., A. M. Forte, L. Boschi, and S. P. Grand (2010), GyPSuM: A joint tomographic model of mantle density and seismic wave speeds, Journal of Geophysical Research: Solid Earth, 115(B12), B12310

  3. Rayleigh wave nonlinear inversion based on the Firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou

    2014-06-01

    Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for the inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, has the advantages of being robust and highly effective, and it allows global searching. Tests with both synthetic models and field data show that the algorithm is feasible and advantageous for Rayleigh wave inversion. The results show that the Firefly algorithm, a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
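    A minimal sketch of the core Firefly update, using a simple sphere test function in place of a dispersion-curve misfit; the parameter values are typical defaults rather than the paper's tuned settings.

        import numpy as np

        def misfit(x):                            # stand-in objective (true minimum at 0)
            return np.sum(x ** 2)

        rng = np.random.default_rng(1)
        n, dim, iters = 20, 4, 200
        alpha, beta0, gamma = 0.2, 1.0, 1.0
        X = rng.uniform(-5, 5, (n, dim))

        for _ in range(iters):
            f = np.array([misfit(x) for x in X])
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:               # move firefly i toward brighter firefly j
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
            alpha *= 0.98                         # gradually reduce the random step size

        best = X[np.argmin([misfit(x) for x in X])]
        print(best, misfit(best))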

  4. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    NASA Astrophysics Data System (ADS)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can now be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained with these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete the data processing, so a fast and effective iterative algorithm is needed to improve the utilization of FTG data. The inversion is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of the FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm has clear advantages. Simulated and field FTG data are used to demonstrate the practical value of this new fast inversion method.
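    A minimal sketch of a non-monotone gradient descent with the Barzilai-Borwein step length applied to a small Tikhonov-regularized least-squares problem; the dense matrix below stands in for a (much larger) FTG sensitivity matrix.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(size=(200, 100))           # toy sensitivity matrix
        x_true = rng.normal(size=100)
        b = A @ x_true + rng.normal(scale=0.01, size=200)
        lam = 1e-2                                # Tikhonov regularization weight

        def grad(x):
            # gradient of ||Ax - b||^2 + lam * ||x||^2
            return 2 * A.T @ (A @ x - b) + 2 * lam * x

        x = np.zeros(100)
        g = grad(x)
        step = 1e-4                               # small safe initial step
        for _ in range(300):
            x_new = x - step * g
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            step = (s @ s) / (s @ y)              # Barzilai-Borwein step (non-monotone)
            x, g = x_new, g_new

        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))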

  5. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-wave velocity and density are very important parameters for distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-wave velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the estimation of elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency under parallel computing, although it often suffers from a lack of clarity in the results. This paper describes a two-stage method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. The proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, the joint inversion is applicable to wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information further enhances the stability and accuracy of parameter estimation, especially for S-wave velocity and density; (3) the two-stage inversion procedure achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel, so it has high computational efficiency. On the other hand, to deal with the blurring of and undesired disturbances to the inversion results obtained from the first stage, we apply a second stage of total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimates with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared with the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating density as well as P-wave and S-wave velocity, even when the seismic data are noisy, with a signal-to-noise ratio of 5.

  6. A study on characterization of stratospheric aerosol and gas parameters with the spacecraft solar occultation experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1977-01-01

    Spacecraft remote sensing of stratospheric aerosol and ozone vertical profiles using the solar occultation experiment has been analyzed. A computer algorithm has been developed in which a two-step inversion of the simulated data is performed. The radiometric data are first inverted into a vertical extinction profile using a linear inversion algorithm. The multiwavelength extinction profiles are then solved with a nonlinear least-squares algorithm to produce aerosol and ozone vertical profiles. Examples of inversion results are shown, illustrating the resolution and noise sensitivity of the inversion algorithms.

  7. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, L.; Gu, H.

    2017-12-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they are valid only when the contrast in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data from the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion has better applicability: it does not rely on such restrictive assumptions and can estimate more parameters simultaneously. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use the Marmousi model to generate synthetic seismic records to test the method and analyze the influence of random noise. Without noise, all estimation results are relatively accurate. As noise increases, the P-wave velocity change and the oil saturation change remain stable and are less affected by noise, whereas the S-wave velocity change is most affected. Finally, we apply the method to actual time-lapse seismic field data, and the results demonstrate its validity and feasibility in practical situations.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
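
    The Pareto-frontier idea underlying this multiobjective design can be illustrated with a small helper that keeps only the non-dominated candidates when both objectives (inversion error and measurement cost) are to be minimized. The candidate values below are random placeholders; the study itself uses a full multiobjective genetic algorithm rather than this filter.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points for a minimization problem.

    objectives: (n_candidates, n_objectives) array; smaller is better for
    every objective (e.g., emission-estimate error and network cost)."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # candidate j dominates i if it is no worse in all objectives and better in one
        dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                      np.any(objectives < objectives[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return np.where(keep)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Hypothetical candidate observing networks: column 0 = inversion error,
    # column 1 = cost of making the measurements (both to be minimized).
    candidates = rng.random((50, 2))
    print("non-dominated candidate networks:", pareto_front(candidates))
```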

  9. Estimating net joint torques from kinesiological data using optimal linear system theory.

    PubMed

    Runge, C F; Zajac, F E; Allum, J H; Risher, D W; Bryson, A E; Honegger, F

    1995-12-01

    Net joint torques (NJT) are frequently computed to provide insights into the motor control of dynamic biomechanical systems. An inverse dynamics approach is almost always used, whereby the NJT are computed from 1) kinematic measurements (e.g., position of the segments), 2) kinetic measurements (e.g., ground reaction forces) that are, in effect, constraints defining unmeasured kinematic quantities based on a dynamic segmental model, and 3) numerical differentiation of the measured kinematics to estimate velocities and accelerations that are, in effect, additional constraints. Due to errors in the measurements, the segmental model, and the differentiation process, estimated NJT rarely produce the observed movement in a forward simulation when the dynamics of the segmental system are inherently unstable (e.g., human walking). Forward dynamic simulations are, however, essential to studies of muscle coordination. We have developed an alternative approach, using the linear quadratic follower (LQF) algorithm, which computes the NJT such that a stable simulation of the observed movement is produced and the measurements are replicated as well as possible. The LQF algorithm does not employ constraints depending on explicit differentiation of the kinematic data, but rather employs those depending on specification of a cost function, based on quantitative assumptions about data confidence. We illustrate the usefulness of the LQF approach by using it to estimate NJT exerted by standing humans perturbed by support-surface movements. We show that unless the number of kinematic and force variables recorded is sufficiently high, the confidence that can be placed in the estimates of the NJT, obtained by any method (e.g., LQF, or the inverse dynamics approach), may be unsatisfactorily low.

  10. Finite-frequency tomography using adjoint methods-Methodology and examples using membrane surface waves

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Tromp, Jeroen

    2007-03-01

    We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
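
    The gradient-based model update described here can be sketched with a bare-bones Polak-Ribiere nonlinear conjugate gradient loop on a toy misfit function. In a real adjoint-tomography application the gradient would come from the misfit kernels projected onto the model basis; the quadratic misfit, line-search parameters, and starting model below are illustrative assumptions.

```python
import numpy as np

def nonlinear_cg(misfit, grad, m0, n_iter=50, tol=1e-8):
    """Polak-Ribiere+ nonlinear conjugate gradient with backtracking line search."""
    m = m0.copy()
    g = grad(m)
    d = -g
    for _ in range(n_iter):
        step, f0 = 1.0, misfit(m)
        # Armijo backtracking: require sufficient decrease along d
        while misfit(m + step * d) > f0 + 1e-4 * step * np.dot(g, d) and step > 1e-12:
            step *= 0.5
        m = m + step * d
        g_new = grad(m)
        if np.linalg.norm(g_new) < tol:
            break
        beta = max(0.0, np.dot(g_new, g_new - g) / np.dot(g, g))  # PR+ formula
        d = -g_new + beta * d
        g = g_new
    return m

if __name__ == "__main__":
    # Toy quadratic misfit standing in for the tomographic misfit function
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -2.0])
    misfit = lambda m: 0.5 * m @ A @ m - b @ m
    grad = lambda m: A @ m - b
    m_est = nonlinear_cg(misfit, grad, np.zeros(2))
    print("estimated model:", m_est, "expected:", np.linalg.solve(A, b))
```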

  11. Joint inversion of high-frequency surface waves with fundamental and higher modes

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Liu, J.; Liu, Q.; Xu, S.

    2007-01-01

    Joint inversion of multimode surface waves for estimating the shear (S)-wave velocity has received much attention in recent years. In this paper, we first analyze the sensitivity of the phase velocities of multiple surface-wave modes for a six-layer earth model, and then invert surface-wave dispersion curves for the theoretical model and a real-world example. The sensitivity analysis shows that fundamental-mode data are more sensitive to the S-wave velocities of shallow layers and are concentrated in a very narrow frequency band, while higher-mode data are more sensitive to the parameters of relatively deeper layers and are distributed over a wider frequency band. These properties provide a foundation for using a multimode joint inversion to define S-wave velocities. Inversion results for both synthetic data and a real-world example demonstrate that joint inversion with the damped least-squares method and the singular-value decomposition technique, inverting high-frequency surface waves with fundamental and higher-mode data simultaneously, can effectively reduce ambiguity and improve the accuracy of S-wave velocities. © 2007.
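
    The damped least-squares solution via SVD used here has a compact form with filter factors s_i / (s_i^2 + eps^2). The sketch below illustrates, on a synthetic ill-conditioned Jacobian (an assumption, not the authors' surface-wave sensitivities), how the damping factor suppresses the poorly constrained small singular values that cause ambiguity.

```python
import numpy as np

def damped_least_squares_svd(G, d, damping):
    """Solve min ||G m - d||^2 + damping^2 ||m||^2 via the SVD of G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    filt = s / (s ** 2 + damping ** 2)       # damped inverse singular values
    return Vt.T @ (filt * (U.T @ d))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Synthetic ill-conditioned Jacobian with widely spread singular values
    U0 = np.linalg.qr(rng.standard_normal((30, 10)))[0]
    V0 = np.linalg.qr(rng.standard_normal((10, 10)))[0]
    G = U0 @ np.diag(np.logspace(0, -4, 10)) @ V0.T
    m_true = rng.standard_normal(10)
    d = G @ m_true + 0.05 * rng.standard_normal(30)
    for eps in (0.0, 0.05, 0.5):
        m = damped_least_squares_svd(G, d, eps)
        print(f"damping={eps:>5}: model error = {np.linalg.norm(m - m_true):.3f}")
```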

  12. Delineation of sediments below flood basalts by joint inversion of seismic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Manglik, A.; Verma, Saurabh K.

    A one-dimensional joint inversion (JI) scheme for seismic reflection, seismic refraction, and MT data is developed. Its efficacy in resolving low-velocity conducting sediments below high-velocity resistive flood basalts is tested for a representative geological model with noisy, incomplete data. The JI is found to provide improved results compared with those obtained by individual seismic and MT inversions.

  13. Comparative evaluation between anatomic and non-anatomic lateral ligament reconstruction techniques in the ankle joint: A computational study.

    PubMed

    Purevsuren, Tserenchimed; Batbaatar, Myagmarbayar; Khuyagbaatar, Batbayar; Kim, Kyungsoo; Kim, Yoon Hyuk

    2018-03-12

    Biomechanical studies have indicated that the conventional non-anatomic reconstruction techniques for lateral ankle sprain (LAS) tend to restrict subtalar joint motion compared to intact ankle joints. Excessive restriction in subtalar motion may lead to chronic pain, functional difficulties, and development of osteoarthritis. Therefore, various anatomic surgical techniques to reconstruct both the anterior talofibular and calcaneofibular ligaments have been introduced. In this study, ankle joint stability was evaluated using a multibody computational ankle joint model to assess two new anatomic reconstruction and three popular non-anatomic reconstruction techniques. An LAS injury, three popular non-anatomic reconstruction models (Watson-Jones, Evans, and Chrisman-Snook), and two common types of anatomic reconstruction models were developed based on the intact ankle model. The stability of the ankle at both the talocrural and subtalar joints was evaluated under an anterior drawer test (150 N anterior force), an inversion test (3 Nm inversion moment), an internal rotation test (3 Nm internal rotation moment), and a combined loading test (9 Nm inversion and internal rotation moment with a 1800 N compressive force). Our overall results show that the two anatomic reconstruction techniques were superior to the non-anatomic reconstruction techniques in stabilizing both talocrural and subtalar joints. Restricted subtalar joint motion, which was mainly observed with the Watson-Jones and Chrisman-Snook techniques, was not seen in the anatomical reconstructions. The Evans technique was beneficial for the subtalar joint because it does not restrict subtalar motion, though it was insufficient for restoring talocrural joint inversion. The anatomical reconstruction techniques best recovered ankle stability.

  14. The in situ force in the calcaneofibular ligament and the contribution of this ligament to ankle joint stability.

    PubMed

    Kobayashi, Takuma; Yamakawa, Satoshi; Watanabe, Kota; Kimura, Kei; Suzuki, Daisuke; Otsubo, Hidenori; Teramoto, Atsushi; Fujimiya, Mineko; Fujie, Hiromichi; Yamashita, Toshihiko

    2016-12-01

    Numerous biomechanical studies of the lateral ankle ligaments have been reported; however, the isolated function of the calcaneofibular ligament has not been clarified. We hypothesize that the calcaneofibular ligament would stabilize the ankle joint complex under multidirectional loading, and that the in situ force in the calcaneofibular ligament would change in each flexed position. Using seven fresh frozen cadaveric lower extremities, the motions and forces of the intact ankle under multidirectional loading were recorded using a 6-degree-of-freedom robotic system. On repeating these intact ankle joint complex motions after calcaneofibular ligament transection, the in situ force in the calcaneofibular ligament and the contribution of the calcaneofibular ligament to ankle joint complex stability were calculated. Finally, the motions of the calcaneofibular ligament-transected ankle joint complex were recorded. Under an inversion load, significant increases in inversion angle were observed in all the flexed positions following calcaneofibular ligament transection, and the calcaneofibular ligament accounted for 50%-70% of ankle joint complex stability during inversion. The in situ forces in the calcaneofibular ligament under an anterior force, inversion moment, and external rotation moment were larger in the dorsiflexed position than in the plantarflexed position. The calcaneofibular ligament plays a role in stabilizing the ankle joint complex against multidirectional loads, and this role differs with load direction. The in situ force of the calcaneofibular ligament is larger in the dorsiflexed position. This ligament provides the primary restraint against ankle inversion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Receiver function HV ratio: a new measurement for reducing non-uniqueness of receiver function waveform inversion

    NASA Astrophysics Data System (ADS)

    Chong, Jiajun; Chu, Risheng; Ni, Sidao; Meng, Qingjun; Guo, Aizhi

    2018-02-01

    It is known that a receiver function provides a relatively weak constraint on absolute seismic wave velocity, and that joint inversion of the receiver function with surface wave dispersion has been widely applied to reduce the trade-off of velocity with interface depth. However, some studies indicate that the receiver function itself is capable of determining the absolute shear-wave velocity. In this study, we propose measuring the receiver function HV ratio, which takes advantage of the amplitude information of the receiver function to constrain the shear-wave velocity. Numerical analysis indicates that the receiver function HV ratio is sensitive to the average shear-wave velocity in the depth range it samples, and can help to reduce the non-uniqueness of receiver function waveform inversion. A joint inversion scheme has been developed, and both synthetic tests and a real-data application demonstrate the feasibility of the joint inversion.

  16. Medial joint line bone bruising at MRI complicating acute ankle inversion injury: what is its clinical significance?

    PubMed

    Chan, V O; Moran, D E; Shine, S; Eustace, S J

    2013-10-01

    To assess the incidence and clinical significance of medial joint line bone bruising following acute ankle inversion injury. Forty-five patients who underwent ankle magnetic resonance imaging (MRI) within 2 weeks of acute ankle inversion injury were included in this prospective study. Integrity of the lateral collateral ligament complex, presence of medial joint line bone bruising, tibio-talar joint effusion, and soft-tissue swelling were documented. Clinical follow-up at 6 months was carried out to determine the impact of injury on length of time out of work, delay in return to normal walking, delay in return to sports activity, and persistence of medial joint line pain. Thirty-seven patients had tears of the anterior talofibular ligament (ATFL). Twenty-six patients had medial joint line bone bruising with altered marrow signal at the medial aspect of the talus and congruent surface of the medial malleolus. A complete ATFL tear was seen in 92% of the patients with medial joint line bone bruising (p = 0.05). Patients with an ATFL tear and medial joint line bone bruising had a longer delay in return to normal walking (p = 0.0002), longer delay in return to sports activity (p = 0.0001), and persistent medial joint line pain (p = 0.0003). There was no statistically significant difference in outcome for the eight patients without ATFL tears. Medial joint line bone bruising following an acute ankle inversion injury was significantly associated with a complete ATFL tear, longer delay in the return to normal walking and sports activity, as well as persistent medial joint line pain. Its presence should prompt detailed assessment of the lateral collateral ligament complex, particularly the ATFL. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  17. Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation

    NASA Astrophysics Data System (ADS)

    Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra

    2017-12-01

    Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super-resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super-resolution image. The method also shows its effectiveness for thick biological samples.

  18. Inferior olive mirrors joint dynamics to implement an inverse controller.

    PubMed

    Alvarez-Icaza, Rodrigo; Boahen, Kwabena

    2012-10-01

    To produce smooth and coordinated motion, our nervous systems need to generate precisely timed muscle activation patterns that, due to axonal conduction delay, must be generated in a predictive and feedforward manner. Kawato proposed that the cerebellum accomplishes this by acting as an inverse controller that modulates descending motor commands to predictively drive the spinal cord such that the musculoskeletal dynamics are canceled out. This and other cerebellar theories do not, however, account for the rich biophysical properties expressed by the olivocerebellar complex's various cell types, making these theories difficult to verify experimentally. Here we propose that a multizonal microcomplex's (MZMC) inferior olivary neurons use their subthreshold oscillations to mirror a musculoskeletal joint's underdamped dynamics, thereby achieving inverse control. We used control theory to map a joint's inverse model onto an MZMC's biophysics, and we used biophysical modeling to confirm that inferior olivary neurons can express the dynamics required to mirror biomechanical joints. We then combined both techniques to predict how experimentally injecting current into the inferior olive would affect overall motor output performance. We found that this experimental manipulation unmasked a joint's natural dynamics, as observed by motor output ringing at the joint's natural frequency, with amplitude proportional to the amount of current. These results support the proposal that the cerebellum, in particular an MZMC, is an inverse controller; the results also provide a biophysical implementation for this controller and allow one to make an experimentally testable prediction.

  19. Inverts permittivity and conductivity with structural constraint in GPR FWI based on truncated Newton method

    NASA Astrophysics Data System (ADS)

    Ren, Qianci

    2018-04-01

    Full waveform inversion (FWI) of ground penetrating radar (GPR) data is a promising technique for quantitatively evaluating the permittivity and conductivity of the near subsurface. However, these two parameters must be inverted simultaneously in GPR FWI, which makes it difficult to obtain accurate results for both. In this study, I present a structurally constrained GPR FWI procedure that jointly inverts the two parameters and enforces a structural relationship between permittivity and conductivity during model reconstruction. The structural constraint is imposed through a cross-gradient function. In this procedure, the permittivity and conductivity models are inverted alternately at each iteration and updated with hierarchical frequency components in the frequency domain. The joint inverse problem is solved with a truncated Newton method, which accounts for the effect of the Hessian operator and uses the approximate solution of the Newton equation as the model perturbation in each update. The procedure is tested on three synthetic examples. The results show that jointly inverting permittivity and conductivity in GPR FWI effectively increases the structural similarity between the two parameters, corrects the structures of the parameter models, and significantly improves the accuracy of the conductivity model, yielding better results than individual inversion.
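
    The cross-gradient constraint used here (and in several other records in this collection) has a simple discrete form in 2D: t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx), which vanishes wherever the gradients of the two models are parallel. The sketch below evaluates it with finite differences on a regular grid; the grid spacing and the layered test models are illustrative assumptions.

```python
import numpy as np

def cross_gradient_2d(m1, m2, dx=1.0, dz=1.0):
    """Discrete cross-gradient t(x, z) between two 2D property models.

    t = dm1/dx * dm2/dz - dm1/dz * dm2/dx; t == 0 wherever the gradients
    of the two models are parallel (i.e., structurally similar)."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)   # np.gradient returns d/d(axis0), d/d(axis1)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

if __name__ == "__main__":
    z, x = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 60), indexing="ij")
    permittivity = 4.0 + 2.0 * (z > 0.5)             # layered permittivity model
    conductivity_aligned = 0.01 + 0.02 * (z > 0.5)   # same structure
    conductivity_rotated = 0.01 + 0.02 * (x > 0.5)   # conflicting structure
    print("aligned max |t|:", np.abs(cross_gradient_2d(permittivity, conductivity_aligned)).max())
    print("rotated max |t|:", np.abs(cross_gradient_2d(permittivity, conductivity_rotated)).max())
```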

  20. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction is based on 3D theory and real data: the static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed during inversion. The method is an automatic, computer-based processing technique with no additional cost, and it avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion runs on a general PC with high efficiency and accuracy. All MT data from surface, seabed, and underground stations can be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1, where the inversion model clearly recovers all anomalous bodies and the terrain regardless of the data type used (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries is improved by including tipper data. The algorithm is very effective for inversion with topography, which makes it useful for studies of the continental shelf with continuous exploration across land, marine, and underground settings. The three-dimensional electrical model of the ore zone reflects the basic information of strata, rocks, and structure. Although it cannot indicate the ore body position directly, important clues for prospecting are provided by delineation of the diorite pluton uplift range. The test results show that high-quality data processing and an efficient inversion method for electromagnetic data are an important guarantee for porphyry ore exploration.

  1. Joint inversion of seismic refraction and resistivity data using layered models - applications to hydrogeology

    NASA Astrophysics Data System (ADS)

    Juhojuntti, N. G.; Kamm, J.

    2010-12-01

    We present a layered-model approach to joint inversion of shallow seismic refraction and resistivity (DC) data, which we believe is a seldom-tested way of addressing the problem. This method has been developed as we believe that for shallow sedimentary environments (roughly <100 m depth) a model with a few layers and sharp layer boundaries better represents the subsurface than a smooth minimum-structure (grid) model. Due to the strong assumption our model parameterization imposes on the subsurface, only a small number of well-resolved model parameters has to be estimated, and provided that this assumption holds, our method can also be applied to other environments. We are using a least-squares inversion, with lateral smoothness constraints, allowing lateral variations in the seismic velocity and the resistivity but no vertical variations. One exception is a positive gradient in the seismic velocity in the uppermost layer in order to get diving rays (the refractions in the deeper layers are modeled as head waves). We assume no connection between seismic velocity and resistivity, and these parameters are allowed to vary individually within the layers. The layer boundaries are, however, common for both parameters. During the inversion lateral smoothing can be applied to the layer boundaries as well as to the seismic velocity and the resistivity. The number of layers is specified before the inversion, and typically we use models with three layers. Depending on the type of environment it is possible to apply smoothing either to the depth of the layer boundaries or to the thickness of the layers, although normally the former is used for shallow sedimentary environments. The smoothing parameters can be chosen independently for each layer. For the DC data we use a finite-difference algorithm to perform the forward modeling and to calculate the Jacobian matrix, while for the seismic data the corresponding entities are retrieved via ray-tracing, using components from the RAYINVR package. The modular layout of the code makes it straightforward to include other types of geophysical data, e.g. gravity. The code has been tested using synthetic examples with fairly simple 2D geometries, mainly for checking the validity of the calculations. The inversion generally converges towards the correct solution, although there can be stability problems if the starting model is too inaccurate. We have also applied the code to field data from seismic refraction and multi-electrode resistivity measurements at typical sand-gravel groundwater reservoirs. The tests are promising, as the calculated depths agree fairly well with information from drilling and the velocity and resistivity values appear reasonable. Current work includes better regularization of the inversion as well as defining individual weight factors for the different datasets, as the present algorithm tends to constrain the depths mainly by using the seismic data. More complex synthetic examples will also be tested, including models addressing the seismic hidden-layer problem.

  2. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  3. A joint equalization algorithm in high speed communication systems

    NASA Astrophysics Data System (ADS)

    Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin

    2018-02-01

    This paper presents a joint equalization algorithm for high-speed communication systems. The algorithm combines the advantages of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage takes advantage of the CMA algorithm, which is insensitive to frequency offset; it is placed before the carrier recovery loop so that the loop performs better and most of the frequency offset is overcome. The post-equalization stage takes advantage of the MMA algorithm to overcome the residual frequency offset. This paper first analyzes the advantages and disadvantages of several equalization algorithms, and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results include constellation diagrams and the bit-error-rate curve, both of which show that the proposed joint equalization algorithm outperforms the traditional algorithms. The residual frequency offset is shown directly in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than the CMA algorithm, 77 times better than MMA equalization, and 9 times better than CMA-MMA equalization.
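
    The constant modulus algorithm (CMA) used for the pre-equalization stage can be sketched as a simple complex-valued stochastic-gradient tap update. The filter length, step size, channel, and QPSK test signal below are illustrative assumptions, and the MMA post-equalizer and carrier recovery loop described in the record are omitted.

```python
import numpy as np

def cma_equalizer(x, n_taps=11, mu=1e-3):
    """Blind constant-modulus (CMA) equalizer: adapts FIR taps so that the
    output modulus approaches a constant; insensitive to carrier offset."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                     # centre-spike initialization
    r2 = 1.0                                 # constant-modulus target for unit-energy QPSK
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]            # regressor, most recent sample first
        y[n] = np.dot(w.conj(), u)
        e = y[n] * (np.abs(y[n]) ** 2 - r2)  # CMA error term
        w -= mu * e.conj() * u               # stochastic-gradient tap update
    return w, y

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    symbols = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
    channel = np.array([1.0, 0.35 + 0.2j, -0.1])      # illustrative dispersive channel
    received = np.convolve(symbols, channel, mode="same")
    received *= np.exp(1j * 2 * np.pi * 1e-4 * np.arange(len(received)))  # frequency offset
    w, y = cma_equalizer(received)
    print("modulus dispersion before:", np.var(np.abs(received) ** 2))
    print("modulus dispersion after: ", np.var(np.abs(y[1000:]) ** 2))
```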

  4. Unusual exostosis formation of the subtalar joint following an inversion ankle injury.

    PubMed

    Cisco, R W; Shaffer, M; Kuchler, L

    1993-01-01

    Exostosis formation following trauma is not uncommon in the joints of the foot and ankle. The etiology and treatment of these bony lesions are well documented in the literature. The following is a report of an unusual exostosis of the subtalar joint following an inversion ankle injury. This case is unusual with respect to the formation of an adventitious articulation, the size of the lesion, and the pathology.

  5. Mapping soil salinity and a fresh-water intrusion in three-dimensions using a quasi-3d joint-inversion of DUALEM-421S and EM34 data

    NASA Astrophysics Data System (ADS)

    Zare, Ehsan; Huang, Jingyi; Koganti, Triven; Triantafilis, John

    2017-04-01

    In order to understand the drivers of topsoil salinization, the distribution and movement of salt in relation to groundwater need to be mapped. In this study, we describe a method to map the distribution of soil salinity, as measured by the electrical conductivity of a saturated soil-paste extract (ECe), in 3 dimensions around a water storage reservoir in an irrigated field near Bourke, New South Wales, Australia. A quasi-3d electromagnetic conductivity image (EMCI) or model of the true electrical conductivity (σ) was developed using 133 apparent electrical conductivity (ECa) measurements collected on a 50 m grid and using various coil arrays of DUALEM-421S and EM34 instruments. For the DUALEM-421S we considered ECa in horizontal coplanar (i.e., 1 mPcon, 2 mPcon and 4 mPcon) and vertical coplanar (i.e., 1 mHcon, 2 mHcon and 4 mHcon) arrays. For the EM34, three measurements in the horizontal mode (i.e., EM34-10H, EM34-20H and EM34-40H) were considered. We estimated σ using a quasi-3d joint-inversion algorithm (EM4Soil). The best correlation (R2 = 0.92) between σ and measured soil ECe was identified when forward modelling (FS), inversion algorithm (S2) and a damping factor (λ = 0.2) were used with both DUALEM-421 and EM34 data, but excluding the 4 m coil arrays of the DUALEM-421S. A linear regression calibration model was used to predict ECe in 3 dimensions beneath the study field. The predicted ECe was consistent with previous studies, revealed the distribution of ECe, and helped to infer a freshwater intrusion from a water storage reservoir at depth and as a function of its proximity to near-surface prior stream channels and buried paleochannels. It was concluded that this method can be applied elsewhere to map soil salinity and water movement and provide guidance for improved land management.

  6. Mapping soil salinity and a fresh-water intrusion in three-dimensions using a quasi-3d joint-inversion of DUALEM-421S and EM34 data.

    PubMed

    Huang, J; Koganti, T; Santos, F A Monteiro; Triantafilis, J

    2017-01-15

    In order to understand the drivers of topsoil salinization, the distribution and movement of salt in relation to groundwater need to be mapped. In this study, we describe a method to map the distribution of soil salinity, as measured by the electrical conductivity of a saturated soil-paste extract (ECe), in 3 dimensions around a water storage reservoir in an irrigated field near Bourke, New South Wales, Australia. A quasi-3d electromagnetic conductivity image (EMCI) or model of the true electrical conductivity (σ) was developed using 133 apparent electrical conductivity (ECa) measurements collected on a 50 m grid and using various coil arrays of DUALEM-421S and EM34 instruments. For the DUALEM-421S we considered ECa in horizontal coplanar (i.e., 1 mPcon, 2 mPcon and 4 mPcon) and vertical coplanar (i.e., 1 mHcon, 2 mHcon and 4 mHcon) arrays. For the EM34, three measurements in the horizontal mode (i.e., EM34-10H, EM34-20H and EM34-40H) were considered. We estimated σ using a quasi-3d joint-inversion algorithm (EM4Soil). The best correlation (R2 = 0.92) between σ and measured soil ECe was identified when forward modelling (FS), inversion algorithm (S2) and a damping factor (λ = 0.2) were used with both DUALEM-421 and EM34 data, but excluding the 4 m coil arrays of the DUALEM-421S. A linear regression calibration model was used to predict ECe in 3 dimensions beneath the study field. The predicted ECe was consistent with previous studies, revealed the distribution of ECe, and helped to infer a freshwater intrusion from a water storage reservoir at depth and as a function of its proximity to near-surface prior stream channels and buried paleochannels. It was concluded that this method can be applied elsewhere to map soil salinity and water movement and provide guidance for improved land management. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. A comparison of subtalar joint motion during anticipated medial cutting turns and level walking using a multi-segment foot model.

    PubMed

    Jenkyn, T R; Shultz, R; Giffin, J R; Birmingham, T B

    2010-02-01

    The weight-bearing in-vivo kinematics and kinetics of the talocrural joint, subtalar joint and joints of the foot were quantified using optical motion analysis. Twelve healthy subjects were studied during level walking and anticipated medial turns at self-selected pace. A multi-segment model of the foot using skin-mounted marker triads tracked four foot segments: the hindfoot, midfoot, lateral and medial forefoot. The lower leg and thigh were also tracked. Motion between each of the segments could occur in three degrees of rotational freedom, but only six inter-segmental motions were reported in this study: (1) talocrural dorsi-plantar-flexion, (2) subtalar inversion-eversion, (3) frontal plane hindfoot motion, (4) transverse plane hindfoot motion, (5) forefoot supination-pronation twisting and (6) the height-to-length ratio of the medial longitudinal arch. The motion at the subtalar joint during stance phase of walking (eversion then inversion) was reversed during a turning task (inversion then eversion). The external subtalar joint moment was also changed from a moderate eversion moment during walking to a larger inversion moment during the turn. The kinematics of the talocrural joint and the joints of the foot were similar between these two tasks. During a medial turn, the subtalar joint may act to maintain the motions in the foot and talocrural joint that occur during level walking. This is occurring despite the conspicuously different trajectory of the centre of mass of the body. This may allow the foot complex to maintain its function of energy absorption followed by energy return during stance phase that is best suited to level walking. Copyright 2009 Elsevier B.V. All rights reserved.

  8. An optimal resolved rate law for kinematically redundant manipulators

    NASA Technical Reports Server (NTRS)

    Bourgeois, B. J.

    1987-01-01

    The resolved rate law for a manipulator provides the instantaneous joint rates required to satisfy a given instantaneous hand motion. When the joint space has more degrees of freedom than the task space, the manipulator is kinematically redundant and the kinematic rate equations are underdetermined. These equations can be locally optimized, but the resulting pseudo-inverse solution was found to cause large joint rates in some cases. A weighting matrix in the locally optimized (pseudo-inverse) solution is dynamically adjusted to control the joint motion as desired. Joint reach limit avoidance is demonstrated in a kinematically redundant planar arm model. The treatment is applicable to redundant manipulators with any number of revolute joints and to nonplanar manipulators.
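
    The weighted pseudo-inverse behind this resolved rate law has the closed form qdot = W^-1 J^T (J W^-1 J^T)^-1 xdot, which minimizes the weighted joint-rate norm subject to the hand-motion constraint. The sketch below applies it to a 3-link planar arm and raises the weight of one joint to mimic reach-limit avoidance; the arm geometry and the weighting rule are illustrative assumptions, not the original NTRS implementation.

```python
import numpy as np

def planar_jacobian(q, link_lengths):
    """2x3 Jacobian of the hand position of a 3-revolute-joint planar arm."""
    l1, l2, l3 = link_lengths
    s = np.cumsum(q)                       # absolute link angles
    # Each joint rotates all distal links, hence the cumulative terms.
    jx = [-l1*np.sin(s[0]) - l2*np.sin(s[1]) - l3*np.sin(s[2]),
          -l2*np.sin(s[1]) - l3*np.sin(s[2]),
          -l3*np.sin(s[2])]
    jy = [ l1*np.cos(s[0]) + l2*np.cos(s[1]) + l3*np.cos(s[2]),
           l2*np.cos(s[1]) + l3*np.cos(s[2]),
           l3*np.cos(s[2])]
    return np.array([jx, jy])

def weighted_resolved_rate(J, xdot, weights):
    """Minimum weighted-norm joint rates satisfying J qdot = xdot."""
    W_inv = np.diag(1.0 / weights)
    return W_inv @ J.T @ np.linalg.solve(J @ W_inv @ J.T, xdot)

if __name__ == "__main__":
    q = np.array([0.3, 0.6, -0.4])
    J = planar_jacobian(q, (1.0, 0.8, 0.5))
    xdot = np.array([0.1, -0.05])                       # desired hand velocity
    uniform = weighted_resolved_rate(J, xdot, np.array([1.0, 1.0, 1.0]))
    limited = weighted_resolved_rate(J, xdot, np.array([1.0, 1.0, 50.0]))  # joint 3 near a limit
    print("uniform weights  :", uniform)
    print("joint 3 penalized:", limited)
```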

  9. Learning Inverse Rig Mappings by Nonlinear Regression.

    PubMed

    Holden, Daniel; Saito, Jun; Komura, Taku

    2017-03-01

    We present a framework to design inverse rig functions: functions that map low-level representations of a character's pose, such as joint positions or surface geometry, to the representation used by animators, called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production which allows animators to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low-level representations such as joint angles, joint positions, or vertex coordinates. This difference often prevents the adoption of state-of-the-art techniques in animation production. Our framework solves this issue by learning a mapping between the low-level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by the animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce such a motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate solution depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation, while retaining the flexibility and creativity of artistic input.

  10. A Study of H-Reflexes in Subjects with Acute Ankle Inversion Injuries

    DTIC Science & Technology

    1996-12-09

    stress to the injured ankle at heel-strike (57). Any increased inversion stress by way of joint loading in the presence of compromised joint ... the present study, may play a role in decreasing the degree of calcaneal inversion just prior to heel-strike and minimize the stress on the lateral ... Presentation: * Significant edema/ecchymosis on lateral and medial aspects of ankle. * Possible pitting edema on forefoot (several days post-injury

  11. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts: self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.

  12. Cryotherapy does not affect peroneal reaction following sudden inversion.

    PubMed

    Berg, Christine L; Hart, Joseph M; Palmieri-Smith, Riann; Cross, Kevin M; Ingersoll, Christopher D

    2007-11-01

    If ankle joint cryotherapy impairs the ability of the ankle musculature to counteract potentially injurious forces, the ankle is left vulnerable to injury. To compare peroneal reaction to sudden inversion following ankle joint cryotherapy. Repeated measures design with independent variables, treatment (cryotherapy and control), and time (baseline, immediately post treatment, 15 minutes post treatment, and 30 minutes post treatment). University research laboratory. Twenty-seven healthy volunteers. An ice bag was secured to the lateral ankle joint for 20 minutes. The onset and average root mean square amplitude of EMG activity in the peroneal muscles was calculated following the release of a trap door mechanism causing inversion. There was no statistically significant change from baseline for peroneal reaction time or average peroneal muscle activity at any post treatment time. Cryotherapy does not affect peroneal muscle reaction following sudden inversion perturbation.

  13. Joint inversion of fundamental and higher mode Rayleigh waves

    USGS Publications Warehouse

    Luo, Y.-H.; Xia, J.-H.; Liu, J.-P.; Liu, Q.-S.

    2008-01-01

    In this paper, we analyze the characteristics of the phase velocity of fundamental and higher mode Rayleigh waves in a six-layer earth model. The results show that the fundamental mode is more sensitive to the shear velocities of shallow layers (< 7 m) and is concentrated in a very narrow band (around 18 Hz), while higher modes are more sensitive to the parameters of relatively deeper layers and are distributed over a wider frequency band. These properties provide a foundation for using a multi-mode joint inversion to define S-wave velocity. Inversion results for both synthetic data and a real-world example demonstrate that joint inversion with the damped least squares method and the SVD (singular value decomposition) technique, inverting Rayleigh waves of fundamental and higher modes together, can effectively reduce ambiguity and improve the accuracy of inverted S-wave velocities.

  14. A review of ocean chlorophyll algorithms and primary production models

    NASA Astrophysics Data System (ADS)

    Li, Jingwen; Zhou, Song; Lv, Nan

    2015-12-01

    This paper introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production based on chlorophyll concentration. Through a comparison of the five inversion algorithms, it summarizes their advantages and disadvantages and briefly analyzes trends in ocean primary production modeling.

  15. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    NASA Astrophysics Data System (ADS)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each of the submitted papers has been reviewed by two reviewers. There have been nine accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet

  16. Effects on Subtalar Joint Stress Distribution After Cannulated Screw Insertion at Different Positions and Directions.

    PubMed

    Yuan, Cheng-song; Chen, Wan; Chen, Chen; Yang, Guang-hua; Hu, Chao; Tang, Kang-lai

    2015-01-01

    We investigated the effects on subtalar joint stress distribution after cannulated screw insertion at different positions and directions. After establishing a 3-dimensional geometric model of a normal subtalar joint, we analyzed the most ideal cannulated screw insertion position and approach for subtalar joint stress distribution and compared the differences in loading stress, antirotary strength, and anti-inversion/eversion strength among lateral-medial antiparallel screw insertion, traditional screw insertion, and ideal cannulated screw insertion. The screw insertion approach allowing the most uniform subtalar joint loading stress distribution was lateral screw insertion near the border of the talar neck plus medial screw insertion close to the ankle joint. For stress distribution uniformity, antirotary strength, and anti-inversion/eversion strength, lateral-medial antiparallel screw insertion was superior to traditional double-screw insertion. Compared with ideal cannulated screw insertion, slightly poorer stress distribution uniformity and better antirotary strength and anti-inversion/eversion strength were observed for lateral-medial antiparallel screw insertion. Traditional single-screw insertion was better than double-screw insertion for stress distribution uniformity but worse for anti-rotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion was slightly worse for stress distribution uniformity than was ideal cannulated screw insertion but superior to traditional screw insertion. It was better than both ideal cannulated screw insertion and traditional screw insertion for anti-rotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion is an approach with simple localization, convenient operation, and good safety. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  17. Coupled land surface–subsurface hydrogeophysical inverse modeling to estimate soil organic carbon content and explore associated hydrological and thermal dynamics in the Arctic tundra

    DOE PAGES

    Tran, Anh Phuong; Dafflon, Baptiste; Hubbard, Susan S.

    2017-09-06

    Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface–subsurface hydrological–thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon–climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological–thermal processes associated with annual freeze–thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets – including soil liquid water content, temperature and electrical resistivity tomography (ERT) data – to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological–thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface–subsurface hydrological dynamics from the bedrock to the top of canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice–liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of desired model parameters. For hydrological–thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and benefit of joint inversion on the estimation of OC and other parameters. We also quantify the propagation of uncertainty from the estimated parameters to prediction of hydrological–thermal responses. We find that, compared to inversion of a single dataset (temperature, liquid water content or apparent resistivity), joint inversion of these datasets significantly reduces parameter uncertainty. We find that the joint inversion approach is able to estimate OC and sand content within the shallow active layer (top 0.3 m of soil) with high reliability. Due to the small variations of temperature and moisture within the shallow permafrost (here at about 0.6 m depth), the approach is unable to estimate OC with confidence. However, if the soil porosity is functionally related to the OC and mineral content, which is often observed in organic-rich Arctic soil, the uncertainty of the OC estimate at this depth remarkably decreases. Our study documents the value of the new surface–subsurface, deterministic–stochastic inversion approach, as well as the benefit of including multiple types of data to estimate OC and associated hydrological–thermal dynamics.
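
    The posterior estimation by MCMC described here can be illustrated with a bare-bones random-walk Metropolis sampler on a toy two-parameter problem. The linear stand-in "forward model", the Gaussian likelihood and prior, and the step size are illustrative assumptions, not the CLM-based coupled model or the adaptive sampler used in the study.

```python
import numpy as np

def metropolis(log_post, x0, n_samples=5000, step=0.02, seed=0):
    """Random-walk Metropolis sampler returning posterior samples."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

if __name__ == "__main__":
    # Toy "forward model": predicted observations as a linear function of two
    # unknown parameters (stand-ins for organic-carbon and sand content).
    rng = np.random.default_rng(1)
    design = rng.standard_normal((40, 2))
    theta_true = np.array([0.6, 0.2])
    obs = design @ theta_true + 0.05 * rng.standard_normal(40)

    def log_post(theta):
        resid = obs - design @ theta
        return -0.5 * np.sum(resid ** 2) / 0.05 ** 2 - 0.5 * np.sum(theta ** 2)  # Gaussian prior

    chain = metropolis(log_post, x0=[0.0, 0.0])
    burn = chain[1000:]
    print("posterior mean:", burn.mean(axis=0), "posterior std:", burn.std(axis=0))
```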

  18. Coupled land surface–subsurface hydrogeophysical inverse modeling to estimate soil organic carbon content and explore associated hydrological and thermal dynamics in the Arctic tundra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tran, Anh Phuong; Dafflon, Baptiste; Hubbard, Susan S.

    Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface–subsurface hydrological–thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon–climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological–thermal processes associated with annual freeze–thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets – including soil liquid water content, temperature and electrical resistivity tomography (ERT) data – to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological–thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface–subsurface hydrological dynamics from the bedrock to the top of canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice–liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of desired model parameters. For hydrological–thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and benefit of joint inversion on the estimation of OC and other parameters. We also quantify the propagation of uncertainty from the estimated parameters to prediction of hydrological–thermal responses. We find that, compared to inversion of a single dataset (temperature, liquid water content or apparent resistivity), joint inversion of these datasets significantly reduces parameter uncertainty. We find that the joint inversion approach is able to estimate OC and sand content within the shallow active layer (top 0.3 m of soil) with high reliability. Due to the small variations of temperature and moisture within the shallow permafrost (here at about 0.6 m depth), the approach is unable to estimate OC with confidence. However, if the soil porosity is functionally related to the OC and mineral content, which is often observed in organic-rich Arctic soil, the uncertainty of the OC estimate at this depth remarkably decreases. Our study documents the value of the new surface–subsurface, deterministic–stochastic inversion approach, as well as the benefit of including multiple types of data to estimate OC and associated hydrological–thermal dynamics.

  19. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction is based on 3D theory and real data: the static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed during inversion. The method is an automatic, computer-based processing technique with no additional cost, and it avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion runs on a general PC with high efficiency and accuracy. All MT data from surface, seabed, and underground stations can be used in the inversion algorithm.

  20. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
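
    As a concrete illustration of the block-elimination step that such a recursive partitioned scheme repeats, the sketch below (not the SOLVE program itself) inverts a small symmetric positive definite matrix through a 2x2 block partition and the Schur complement; the matrix and partition size are arbitrary test values.

    import numpy as np

    def spd_inverse_partitioned(M, k):
        """Invert a symmetric positive definite matrix via a 2x2 block partition.

        Illustrative sketch only (not the SOLVE program): the leading k x k block A
        is inverted first, then the Schur complement of A is inverted, mirroring one
        step of a recursive partitioned (block Gaussian elimination) scheme.
        """
        A = M[:k, :k]
        B = M[:k, k:]
        C = M[k:, k:]
        A_inv = np.linalg.inv(A)                 # recurse here for large blocks
        S = C - B.T @ A_inv @ B                  # Schur complement (also SPD)
        S_inv = np.linalg.inv(S)                 # recurse here as well
        top_left = A_inv + A_inv @ B @ S_inv @ B.T @ A_inv
        top_right = -A_inv @ B @ S_inv
        return np.block([[top_left, top_right],
                         [top_right.T, S_inv]])

    # Usage: compare against a direct inverse on a small random SPD matrix.
    X = np.random.rand(6, 6)
    M = X @ X.T + 6 * np.eye(6)
    print(np.allclose(spd_inverse_partitioned(M, 3), np.linalg.inv(M)))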

  1. Correlation-based regularization and gradient operators for (joint) inversion on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan

    2017-04-01

    When working with unstructured meshes for geophysical inversions, special attention should be paid to the design of the operators that are used for regularizing the inverse problem and for coupling different property models in joint inversions. Regularization constraints for inversions on unstructured meshes are often defined in a rather ad-hoc manner and usually only involve the cell to which the operator is applied and its direct neighbours. Similarly, most structural coupling operators for joint inversion, such as the popular cross-gradients operator, are only defined in the direct neighbourhood of a cell. As a result, the regularization and coupling length scales and the strength of these operators depend on the discretization as well as on cell sizes and shapes. Especially for unstructured meshes, where the cell sizes vary throughout the model domain, the dependency of the operator on the discretization may lead to artefacts. Designing operators that are based on a spatial correlation model allows one to define correlation length scales over which an operator acts (its footprint), reducing the dependency on the discretization and the effects of variable cell sizes. Moreover, correlation-based operators can accommodate expected anisotropy by using different length scales in the horizontal and vertical directions. Correlation-based regularization operators, also known as stochastic regularization operators, have already been successfully applied to inversions on regular grids. Here, we formulate stochastic operators for unstructured meshes and apply them in 2D surface and 3D cross-well electrical resistivity tomography data inversion examples of layered media. Especially for the synthetic cross-well example, improved inversion results are achieved when stochastic regularization is used instead of a classical smoothness constraint. For the case of cross-gradients operators for joint inversion, the correlation model is used to define the footprint of the operator and to weigh the contributions of the property values that are used to calculate the cross-gradients. In a first series of synthetic-data tests, we examined the mesh dependency of the cross-gradients operators. Compared to operators that are only defined in the direct neighbourhood of a cell, the dependency of the cross-gradients calculation on the cell size is markedly reduced when using operators with larger footprints. A second test with synthetic models focussed on the effect of small-scale variability of the parameter values on the cross-gradients calculation. Small-scale variability that is superimposed on a global trend of the property value can potentially degrade the cross-gradients calculation and destabilize the joint inversion. We observe that cross-gradients from operators with footprints larger than the length scale of the variability are less affected than those from operators with a small footprint. In joint inversions on unstructured meshes, we thus expect the correlation-based coupling operators to ensure robust coupling on a physically meaningful scale.
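
    For readers unfamiliar with the cross-gradients function that these coupling operators generalize, the following sketch evaluates it on a simple structured grid; it illustrates the quantity being constrained under the assumption of a regular 2D grid, and is not the correlation-weighted unstructured-mesh operator described above.

    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        """Cross-gradient t = grad(m1) x grad(m2) on a regular 2D grid.

        Simplified sketch on a structured grid for illustration only; the paper's
        operators act on unstructured meshes with correlation-based footprints.
        In 2D the cross product has a single out-of-plane component.
        """
        dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
        dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
        return dm1_dx * dm2_dz - dm1_dz * dm2_dx   # zero where gradients are parallel

    # Two structurally identical (layered) models give a near-zero cross-gradient.
    x, z = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    velocity = 1.0 + np.tanh(10 * (z - 0.5))
    resistivity = 100.0 - 80.0 * np.tanh(10 * (z - 0.5))
    print(np.abs(cross_gradient(velocity, resistivity)).max())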

  2. Designing optimal greenhouse gas observing networks that consider performance and cost

    DOE PAGES

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; ...

    2015-06-16

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
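
    The sketch below illustrates the Pareto-dominance test that defines such a frontier for candidate networks scored by two objectives (inversion error and cost); the scores are hypothetical and the genetic-algorithm machinery used in the study is not reproduced.

    import numpy as np

    def pareto_front(objectives):
        """Return indices of non-dominated points when both objectives are minimized.

        Illustration of the Pareto-frontier concept only; the study uses a
        multiobjective genetic algorithm to evolve the candidate networks.
        """
        n = objectives.shape[0]
        keep = []
        for i in range(n):
            dominated = False
            for j in range(n):
                if j != i and np.all(objectives[j] <= objectives[i]) \
                          and np.any(objectives[j] < objectives[i]):
                    dominated = True
                    break
            if not dominated:
                keep.append(i)
        return keep

    # Hypothetical candidate networks scored by (inversion error, measurement cost).
    scores = np.array([[0.9, 1.0], [0.5, 3.0], [0.7, 2.0], [0.6, 2.5], [0.8, 1.5]])
    print(pareto_front(scores))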

  3. [Influence of Restricting the Ankle Joint Complex Motions on Gait Stability of Human Body].

    PubMed

    Li, Yang; Zhang, Junxia; Su, Hailong; Wang, Xinting; Zhang, Yan

    2016-10-01

    The purpose of this study is to determine how restricting inversion-eversion and pronation-supination motions of the ankle joint complex influences the stability of human gait. The experiment was carried out on a slippery level ground walkway. Spatiotemporal gait parameters, kinematics and kinetics data as well as the utilized coefficient of friction (UCOF) were compared between two conditions, i.e., with restriction of the ankle joint complex inversion-eversion and pronation-supination motions (FIXED) and without restriction (FREE). The results showed that FIXED could lead to a significant increase in velocity and stride length and an obvious decrease in double support time. Furthermore, FIXED might affect the range of motion of the knee joint and ankle joint in the sagittal plane. In the FIXED condition, UCOF was significantly increased, which could lead to an increase in slip probability and a decrease in gait stability. Hence, in the design of a walker, bipedal robot or prosthesis, a structure that allows the inversion-eversion and pronation-supination motions of the ankle joint complex should be implemented.

  4. Joint Retrieval Of Surface Reflectance And Aerosol Properties: Application To MSG/SEVIRI in the framework of the aerosol_cci project

    NASA Astrophysics Data System (ADS)

    Luffarelli, Marta; Govaerts, Yves; Goossens, Cedric

    2017-04-01

    A new versatile algorithm for the joint retrieval of surface reflectance and aerosol properties has been developed and tested at Rayference. This algorithm, named Combined Inversion of Surface and Aerosols (CISAR), includes a fast physically-based Radiative Transfer Model (RTM) accounting for the surface reflectance anisotropy and its coupling with aerosol scattering. This RTM explicitly solves the radiative transfer equation during the inversion process, without relying on pre-calculated integrals stored in LUTs, allowing for a continuous variation of the state variables in the solution space. The inversion is based on an Optimal Estimation (OE) approach, which seeks the best balance between the information coming from the observations and the a priori information. The a priori information is any additional knowledge of the observed system; it can concern the magnitude of the state variables or constraints on their temporal and spectral variability. Both observations and a priori information are provided with their corresponding uncertainties. For each processed spectral band, CISAR delivers the surface Bidirectional Reflectance Factor (BRF) and aerosol optical thickness, discriminating the effects of small and large particles. It also provides the associated uncertainty covariance matrix for every processed pixel. In the framework of the ESA aerosol_cci project, CISAR is applied to TOA BRF data acquired by SEVIRI onboard Meteosat Second Generation (MSG) in the VIS0.6, VIS0.8 and NIR1.6 spectral bands. SEVIRI observations are accumulated over several days to document the surface anisotropy and minimize the impact of clouds. While surface radiative properties are assumed constant during this accumulation period, aerosol properties are derived on an hourly basis. The information content of each MSG/SEVIRI band will be provided based on the analysis of the posterior uncertainty covariance matrix. The analysis will demonstrate in particular the capability of CISAR to decouple the fraction of the TOA BRF signal coming from the surface from the one originating from the aerosols. The results of the algorithm are compared with independent data sets of AOD and surface reflectance. Comparison with ground observations from the AERONET network shows good agreement between these data. The surface reflectance evaluation is performed by comparing the white-sky albedo retrieved by CISAR with the MODIS surface product; this evaluation shows very good consistency. The retrieved aerosol optical depth is also consistent in terms of spatial distribution, being comparable in geographical location and intensity.
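
    A minimal sketch of a single Gauss-Newton step on the Optimal Estimation cost function is given below, assuming a toy linear forward model; the variable names and dimensions are illustrative and do not reflect the CISAR state vector or constraints.

    import numpy as np

    def oe_gauss_newton_step(x, y, forward, jacobian, x_a, S_a, S_e):
        """One Gauss-Newton update of the Optimal Estimation cost
        J(x) = (y - F(x))^T Se^-1 (y - F(x)) + (x - xa)^T Sa^-1 (x - xa).

        Illustrative sketch only; CISAR's state vector, forward model and
        constraints are far richer than this toy example.
        """
        K = jacobian(x)                       # sensitivity of the forward model
        Se_inv = np.linalg.inv(S_e)
        Sa_inv = np.linalg.inv(S_a)
        hess = K.T @ Se_inv @ K + Sa_inv      # Gauss-Newton Hessian approximation
        grad = K.T @ Se_inv @ (y - forward(x)) - Sa_inv @ (x - x_a)
        S_post = np.linalg.inv(hess)          # posterior uncertainty covariance
        return x + S_post @ grad, S_post

    # Toy linear forward model F(x) = A x (hypothetical).
    A = np.array([[1.0, 0.5], [0.2, 2.0], [1.0, 1.0]])
    x_true = np.array([0.3, 0.7])
    y = A @ x_true
    x_a = np.zeros(2)
    x_hat, S_post = oe_gauss_newton_step(x_a, y, lambda x: A @ x, lambda x: A,
                                         x_a, np.eye(2), 0.01 * np.eye(3))
    print(x_hat, np.sqrt(np.diag(S_post)))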

  5. An optimal resolved rate law for kinematically redundant manipulators

    NASA Technical Reports Server (NTRS)

    Bourgeois, B. J.

    1987-01-01

    The resolved rate law for a manipulator provides the instantaneous joint rates required to satisfy a given instantaneous hand motion. When the joint space has more degrees of freedom than the task space, the manipulator is kinematically redundant and the kinematic rate equations are underdetermined. These equations can be locally optimized, but the resulting pseudo-inverse solution has been found to cause large joint rates in some cases. A weighting matrix in the locally optimized (pseudo-inverse) solution is dynamically adjusted to control the joint motion as desired. Joint reach limit avoidance is demonstrated in a kinematically redundant planar arm model. The treatment is applicable to redundant manipulators with any number of revolute joints and to non-planar manipulators.
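
    The weighted, locally optimized (pseudo-inverse) solution described above can be sketched as follows, assuming a hypothetical planar-arm Jacobian; raising the weight of a joint that approaches its reach limit suppresses its commanded rate.

    import numpy as np

    def weighted_resolved_rates(J, xdot, w):
        """Joint rates from the weighted pseudo-inverse: minimizes qdot^T W qdot
        subject to J qdot = xdot.  Increasing a joint's weight penalizes its motion,
        e.g. to steer it away from a reach limit.  Illustrative sketch only.
        """
        W_inv = np.diag(1.0 / w)
        return W_inv @ J.T @ np.linalg.solve(J @ W_inv @ J.T, xdot)

    # Hypothetical 3-joint planar arm Jacobian (2 task DOF, 3 joints): redundant.
    J = np.array([[-0.8, -0.5, -0.2],
                  [ 1.2,  0.7,  0.3]])
    xdot = np.array([0.1, 0.0])                                   # desired hand velocity
    print(weighted_resolved_rates(J, xdot, np.ones(3)))           # plain pseudo-inverse
    print(weighted_resolved_rates(J, xdot, np.array([1.0, 1.0, 50.0])))  # joint 3 near limit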

  6. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  7. Black hole algorithm for determining model parameter in self-potential data

    NASA Astrophysics Data System (ADS)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is increasingly popular in geophysical methods due to its relevance in many cases. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution in nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm was constructed based on the black hole phenomenon. This paper investigates the application of BHA to the inversion of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty. This indicates that BHA has high potential as an innovative approach for SP data inversion.
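
    A minimal sketch of the black hole metaheuristic is shown below, applied to a stand-in two-parameter misfit function rather than an actual SP forward model; the population size, iteration count and re-seeding rule are illustrative assumptions.

    import numpy as np

    def black_hole_optimize(misfit, bounds, n_stars=30, n_iter=200, rng=None):
        """Minimal Black Hole Algorithm sketch: the best star acts as the black hole,
        all others drift toward it, and stars inside the event horizon are replaced
        by new random candidates.  Stand-in misfit; not an SP forward model.
        """
        rng = np.random.default_rng(0) if rng is None else rng
        lo, hi = bounds
        stars = rng.uniform(lo, hi, size=(n_stars, len(lo)))
        for _ in range(n_iter):
            fitness = np.array([misfit(s) for s in stars])
            best = np.argmin(fitness)
            bh = stars[best].copy()
            radius = fitness.min() / fitness.sum()         # event horizon radius
            stars += rng.uniform(0, 1, size=stars.shape) * (bh - stars)
            too_close = np.linalg.norm(stars - bh, axis=1) < radius
            stars[too_close] = rng.uniform(lo, hi, size=(too_close.sum(), len(lo)))
            stars[best] = bh                               # keep the black hole itself
        return bh, misfit(bh)

    # Hypothetical 2-parameter misfit with its minimum at (1.5, -0.5).
    best, val = black_hole_optimize(lambda m: (m[0] - 1.5)**2 + (m[1] + 0.5)**2,
                                    (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
    print(best, val)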

  8. Improvements of Travel-time Tomography Models from Joint Inversion of Multi-channel and Wide-angle Seismic Data

    NASA Astrophysics Data System (ADS)

    Begović, Slaven; Ranero, César; Sallarès, Valentí; Meléndez, Adrià; Grevemeyer, Ingo

    2016-04-01

    Commonly, multichannel seismic reflection (MCS) and wide-angle seismic (WAS) data are modeled and interpreted with different approaches. Conventional travel-time tomography models using solely WAS data lack the resolution to define the model properties and, particularly, the geometry of geologic boundaries (reflectors) with the required accuracy, especially in the shallow, complex upper geological layers. We mitigate this issue by combining these two different data sets, specifically taking advantage of the high redundancy of MCS data, integrated with WAS data into a common inversion scheme to obtain higher-resolution velocity models (Vp), decrease Vp uncertainty and improve the geometry of reflectors. To do so, we have adapted the tomo2d and tomo3d joint refraction and reflection travel-time tomography codes (Korenaga et al., 2000; Meléndez et al., 2015) to deal with streamer data and MCS acquisition geometries. The scheme results in a joint travel-time tomographic inversion based on integrated travel-time information from refracted and reflected phases of the WAS data and reflected phases identified in the MCS common depth point (CDP) or shot gathers. To illustrate the advantages of a common inversion approach, we have compared the modeling results for synthetic data sets using two different travel-time inversion strategies. First, we produced seismic velocity models and reflector geometries following a typical refraction and reflection travel-time tomography strategy, modeling just the WAS data with a typical acquisition geometry (one OBS every 10 km). Second, we performed a joint inversion of the two types of seismic data, integrating two coincident data sets consisting of MCS data collected with an 8 km-long streamer and the WAS data into a common inversion scheme. Our synthetic results from the joint inversion indicate a 5-10 times smaller ray travel-time misfit in the deeper parts of the model, compared to models obtained using just the wide-angle seismic data. As expected, there is an important improvement in the definition of the reflector geometry, which, in turn, improves the accuracy of the velocity retrieval just above and below the reflector. To test the joint inversion approach with real data, we combined wide-angle (WAS) seismic and coincident multichannel seismic reflection (MCS) data acquired in the northern Chile subduction zone into a common inversion scheme to obtain higher-resolution information on the upper plate and the inter-plate boundary.

  9. 3-D CSEM data inversion algorithm based on simultaneously active multiple transmitters concept

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin Kumar; Israil, Mohammad

    2017-05-01

    We present an algorithm for efficient 3-D inversion of marine controlled-source electromagnetic data. The efficiency is achieved by exploiting the redundancy in the data. The data redundancy is reduced by compressing the data through stacking of the responses of transmitters that are in close proximity. This stacking is equivalent to synthesizing the data as if the multiple transmitters were simultaneously active. The redundancy in the data arising from close transmitter spacing has been studied through singular value analysis of the Jacobian formed in 1-D inversion. This study reveals that the transmitter spacing of 100 m, typically used in marine data acquisition, does result in redundancy in the data. In the proposed algorithm, the data are compressed through stacking, which leads to both a computational advantage and a reduction in noise. The performance of the algorithm for noisy data is demonstrated through studies of two types of noise, viz., uncorrelated additive noise and correlated non-additive noise. It is observed that in the case of uncorrelated additive noise, up to a moderately high (10 percent) noise level, the algorithm addresses the noise as effectively as the traditional full data inversion. However, when the noise level in the data is high (20 percent), the algorithm outperforms the traditional full data inversion in terms of data misfit. Similar results are obtained in the case of correlated non-additive noise, and the algorithm performs better if the level of noise is high. The inversion results of a real field data set are also presented to demonstrate the robustness of the algorithm. The significant computational advantage in all cases presented makes this algorithm a better choice.
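
    The stacking idea can be sketched as below, assuming a simple fixed-window grouping of transmitters along the tow line; the grouping rule, spacing and noise level are illustrative and not taken from the paper.

    import numpy as np

    def stack_transmitters(data, tx_positions, group_width=300.0):
        """Compress CSEM data by summing (stacking) responses of transmitters that
        lie within the same along-line window, as if they fired simultaneously.

        data: (n_tx, n_rx) measured responses; tx_positions: along-line coordinates.
        Illustrative grouping rule (fixed window width); stacking also averages
        down uncorrelated additive noise within each group.
        """
        order = np.argsort(tx_positions)
        data, tx_positions = data[order], tx_positions[order]
        groups = ((tx_positions - tx_positions[0]) // group_width).astype(int)
        stacked = np.array([data[groups == g].sum(axis=0) for g in np.unique(groups)])
        return stacked, groups

    # Hypothetical survey: 30 transmitters spaced 100 m apart, 8 receivers.
    rng = np.random.default_rng(1)
    tx = 100.0 * np.arange(30)
    clean = np.exp(-0.001 * np.abs(tx[:, None] - 1500.0)) * np.ones((1, 8))
    noisy = clean + 0.05 * rng.standard_normal((30, 8))
    stacked, groups = stack_transmitters(noisy, tx)
    print(noisy.shape, "->", stacked.shape)   # (30, 8) -> (10, 8)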

  10. The effects of a semi-rigid ankle brace on a simulated isolated subtalar joint instability.

    PubMed

    Choisne, Julie; Hoch, Matthew C; Bawab, Sebastian; Alexander, Ian; Ringleb, Stacie I

    2013-12-01

    Subtalar joint instability is hypothesized to occur after injuries to the calcaneofibular ligament (CFL) in isolation or in combination with the cervical and the talocalcaneal interosseous ligaments. A common treatment for hindfoot instability is the application of an ankle brace. However, the ability of an ankle brace to promote subtalar joint stability is not well established. We assessed the kinematics of the subtalar joint, ankle, and hindfoot in the presence of isolated subtalar instability, investigated the effect of bracing in a CFL deficient foot and with a total rupture of the intrinsic ligaments, and evaluated how maximum inversion range of motion is affected by the position of the ankle in the sagittal plane. Kinematics from nine cadaveric feet were collected with the foot placed in neutral, dorsiflexion, and plantar flexion. Motion was applied with and without a brace on an intact foot and after sequentially sectioning the CFL and the intrinsic ligaments. Isolated CFL sectioning increased ankle joint inversion, while sectioning the CFL and intrinsic ligaments affected subtalar joint stability. The brace limited inversion at the subtalar and ankle joints. Additionally, examining the foot in dorsiflexion reduced ankle and subtalar joint motion. © 2013 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  11. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only element-level system identification and input estimation technique, towards the simultaneous identification of modal parameters, input excitation time history and structural features at the element-level by adopting earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), releases strong assumptions of earlier element-level techniques, by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence for the identified estimates. The proposed method works in a deterministic way and is completely developed in State-Space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, also with noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.

  12. Network-Physics(NP) Bec DIGITAL(#)-VULNERABILITY Versus Fault-Tolerant Analog

    NASA Astrophysics Data System (ADS)

    Alexander, G. K.; Hathaway, M.; Schmidt, H. E.; Siegel, E.

    2011-03-01

    Siegel[AMS Joint Mtg.(2002)-Abs.973-60-124] digits logarithmic-(Newcomb(1881)-Weyl(1914; 1916)-Benford(1938)-"NeWBe"/"OLDbe")-law algebraic-inversion to ONLY BEQS BEC:Quanta/Bosons= digits: Synthesis reveals EMP-like SEVERE VULNERABILITY of ONLY DIGITAL-networks(VS. FAULT-TOLERANT ANALOG INvulnerability) via Barabasi "Network-Physics" relative-``statics''(VS.dynamics-[Willinger-Alderson-Doyle(Not.AMS(5/09)]-]critique); (so called)"Quantum-computing is simple-arithmetic(sans division/ factorization); algorithmic-complexities: INtractibility/ UNdecidability/ INefficiency/NONcomputability / HARDNESS(so MIScalled) "noise"-induced-phase-transitions(NITS) ACCELERATION: Cook-Levin theorem Reducibility is Renormalization-(Semi)-Group fixed-points; number-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(02)] How? mea culpa)can ONLY be MBCS "hot-plasma" versus digit-clumping NON-random BEC; Modular-arithmetic Congruences= Signal X Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)] BEC logarithmic-law inversion factorization:Watkins number-thy. U stat.-phys.); P=/=NP TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation via geometry.

  13. Two-dimensional frequency-domain acoustic full-waveform inversion with rugged topography

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Jiang; Dai, Shi-Kun; Chen, Long-Wei; Li, Kun; Zhao, Dong-Dong; Huang, Xing-Xing

    2015-09-01

    We studied finite-element-method-based two-dimensional frequency-domain acoustic FWI under rugged topography conditions. The exponential attenuation boundary condition suitable for rugged topography is proposed to solve the cutoff boundary problem as well as to consider the requirement of using the same subdivision grid in joint multifrequency inversion. The proposed method introduces the attenuation factor, and by adjusting it, acoustic waves are sufficiently attenuated in the attenuation layer to minimize the cutoff boundary effect. Based on the law of exponential attenuation, expressions for computing the attenuation factor and the thickness of attenuation layers are derived for different frequencies. In multifrequency-domain FWI, the conjugate gradient method is used to solve equations in the Gauss-Newton algorithm and thus minimize the computation cost in calculating the Hessian matrix. In addition, the effect of initial model selection and frequency combination on FWI is analyzed. Examples using numerical simulations and FWI calculations are used to verify the efficiency of the proposed method.

  14. RANDOMNESS of Numbers DEFINITION(QUERY:WHAT? V HOW?) ONLY Via MAXWELL-BOLTZMANN CLASSICAL-Statistics(MBCS) Hot-Plasma VS. Digits-Clumping Log-Law NON-Randomness Inversion ONLY BOSE-EINSTEIN QUANTUM-Statistics(BEQS) .

    NASA Astrophysics Data System (ADS)

    Siegel, Z.; Siegel, Edward Carl-Ludwig

    2011-03-01

    RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Zichao; Chen, Si; Hong, Young Pyo

    X-ray fluorescence tomography is based on the detection of fluorescence x-ray photons produced following x-ray absorption while a specimen is rotated; it provides information on the 3D distribution of selected elements within a sample. One limitation in the quality of sample recovery is the separation of elemental signals due to the finite energy resolution of the detector. Another limitation is the effect of self-absorption, which can lead to inaccurate results with dense samples. To recover a higher quality elemental map, we combine x-ray fluorescence detection with a second data modality: conventional x-ray transmission tomography using absorption. By using these combined signals in a nonlinear optimization-based approach, we demonstrate the benefit of our algorithm on real experimental data and obtain an improved quantitative reconstruction of the spatial distribution of dominant elements in the sample. Furthermore, compared with single-modality inversion based on x-ray fluorescence alone, this joint inversion approach reduces ill-posedness and should result in improved elemental quantification and better correction of self-absorption.

  16. Optimization of the Inverse Algorithm for Estimating the Optical Properties of Biological Materials Using Spatially-resolved Diffuse Reflectance Technique

    USDA-ARS?s Scientific Manuscript database

    Determination of the optical properties from intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...

  17. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it originates from research on the flocking behavior of birds and fish. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to the Newton method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms, and finally present the application of the Levenberg-Marquardt algorithm and a particle swarm algorithm to solving the inverse problem of a fault. Importantly, the parameters used by the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
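
    A minimal sketch of a global-best PSO applied to a two-parameter fault-style anomaly is given below; the smoothed-step forward model and all parameter values are stand-ins, not the formulation used in the paper.

    import numpy as np

    def fault_anomaly(x, depth, amplitude):
        """Stand-in gravity anomaly of a faulted slab: a smoothed step whose sharpness
        is controlled by the fault depth.  Not the exact formulation of the paper."""
        return amplitude * (0.5 + np.arctan(x / depth) / np.pi)

    def pso(misfit, lo, hi, n_particles=40, n_iter=150, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer (global-best topology).  Sketch only."""
        rng = np.random.default_rng(seed)
        pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([misfit(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    # Synthetic example: recover depth and amplitude from a noise-free profile.
    x_obs = np.linspace(-2000.0, 2000.0, 81)
    g_obs = fault_anomaly(x_obs, depth=400.0, amplitude=1.2)
    objective = lambda m: np.sum((fault_anomaly(x_obs, *m) - g_obs) ** 2)
    print(pso(objective, np.array([50.0, 0.1]), np.array([2000.0, 5.0])))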

  18. Joint stability characteristics of the ankle complex after lateral ligamentous injury, part I: a laboratory comparison using arthrometric measurement.

    PubMed

    Kovaleski, John E; Heitman, Robert J; Gurchiek, Larry R; Hollis, J M; Liu, Wei; Pearsall, Albert W

    2014-01-01

    The mechanical property of stiffness may be important to investigating how lateral ankle ligament injury affects the behavior of the viscoelastic properties of the ankle complex. A better understanding of injury effects on tissue elastic characteristics in relation to joint laxity could be obtained from cadaveric study. To biomechanically determine the laxity and stiffness characteristics of the cadaver ankle complex before and after simulated injury to the anterior talofibular ligament (ATFL) and calcaneofibular ligament (CFL) during anterior drawer and inversion loading. Cross-sectional study. University research laboratory. Seven fresh-frozen cadaver ankle specimens. All ankles underwent loading before and after simulated lateral ankle injury using an ankle arthrometer. The dependent variables were anterior displacement, anterior end-range stiffness, inversion rotation, and inversion end-range stiffness. Isolated ATFL and combined ATFL and CFL sectioning resulted in increased anterior displacement but not end-range stiffness when compared with the intact ankle. With inversion loading, combined ATFL and CFL sectioning resulted in increased range of motion and decreased end-range stiffness when compared with the intact and ATFL-sectioned ankles. The absence of change in anterior end-range stiffness between the intact and ligament-deficient ankles indicated bony and other soft tissues functioned to maintain stiffness after pathologic joint displacement, whereas inversion loading of the CFL-deficient ankle after pathologic joint displacement indicated the ankle complex was less stiff when supported only by the secondary joint structures.

  19. Recursive mass matrix factorization and inversion: An operator approach to open- and closed-chain multibody dynamics

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, K.

    1988-01-01

    This report advances a linear operator approach for analyzing the dynamics of systems of joint-connected rigid bodies. It is established that the mass matrix M for such a system can be factored as M = (I + HφL) D (I + HφL)^T. This yields an immediate inversion M^-1 = (I - HψL)^T D^-1 (I - HψL), where H and φ are given by known link geometric parameters, and L, ψ and D are obtained recursively by a spatial discrete-step Kalman filter and by the corresponding Riccati equation associated with this filter. The factors (I + HφL) and (I - HψL) are lower triangular matrices which are inverses of each other, and D is a diagonal matrix. This factorization and inversion of the mass matrix lead to recursive algorithms for forward dynamics based on spatially recursive filtering and smoothing. The primary motivation for advancing the operator approach is to provide a better means to formulate, analyze and understand spatial recursions in multibody dynamics. This is achieved because the linear operator notation allows manipulation of the equations of motion within a very high-level analytical framework (a spatial operator algebra) that is easy to understand and use. Detailed lower-level recursive algorithms can readily be obtained by inspection from the expressions involving spatial operators. The report consists of two main sections. In Part 1, the problem of serial chain manipulators is analyzed and solved. Extensions to a closed-chain system formed by multiple manipulators moving a common task object are contained in Part 2. To retain ease of exposition in the report, only these two types of multibody systems are considered. However, the same methods can be easily applied to arbitrary multibody systems formed by a collection of joint-connected rigid bodies.
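
    A small numerical check of this factored-inverse structure is sketched below, with a generic unit lower-triangular matrix standing in for (I + HφL); the spatial Kalman-filter recursion that actually produces L, ψ and D is not reproduced.

    import numpy as np

    # Numerical illustration of the factored inverse structure only: a generic unit
    # lower-triangular factor U stands in for (I + H*phi*L); the spatial Kalman
    # filter recursion that produces L, psi and D is not reproduced here.
    rng = np.random.default_rng(0)
    n = 5
    U = np.eye(n) + np.tril(rng.standard_normal((n, n)), k=-1)   # plays (I + H phi L)
    D = np.diag(rng.uniform(1.0, 3.0, n))                        # diagonal factor

    M = U @ D @ U.T                                              # mass matrix
    U_inv = np.linalg.inv(U)                                     # plays (I - H psi L)
    M_inv = U_inv.T @ np.linalg.inv(D) @ U_inv                   # factored inverse

    print(np.allclose(M_inv, np.linalg.inv(M)))    # factored inverse matches direct inverse
    print(np.allclose(U @ U_inv, np.eye(n)))       # the triangular factors are mutual inverses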

  20. Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity

    NASA Astrophysics Data System (ADS)

    Li, Dunzhu; Gurnis, Michael; Stadler, Georg

    2017-04-01

    We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.

  1. The Krafla International Testbed (KMT): Ground Truth for the New Magma Geophysics

    NASA Astrophysics Data System (ADS)

    Brown, L. D.; Kim, D.; Malin, P. E.; Eichelberger, J. C.

    2017-12-01

    Recent developments in geophysics such as large-N seismic arrays, 4D (time-lapse) subsurface imaging and joint inversion algorithms represent fresh approaches to delineating and monitoring magma in the subsurface. Drilling at Krafla, both past and proposed, offers unique opportunities to quantitatively corroborate and calibrate these new technologies. For example, dense seismic arrays are capable of passive imaging of magma systems with resolutions comparable to that achieved by more expensive (and often logistically impractical) controlled source surveys such as those used in oil exploration. Fine details of the geometry of magma lenses, feeders and associated fluid-bearing fracture systems on the scale of meters to tens of meters are now realistic targets for surface seismic surveys using ambient energy sources, as is detection of their temporal variations. Joint inversions, for example of seismic and MT measurements, offer the promise of tighter quantitative constraints on the physical properties of the various components of magma and related geothermal systems imaged by geophysics. However, the accuracy of such techniques will remain captive to academic debate without testing against real-world targets that have been directly sampled. Thus, application of these new techniques both to guide future drilling at Krafla and to be calibrated against the resulting borehole observations of magma is an important step forward in validating geophysics for magma studies in general.

  2. Probabilistic inversion with graph cuts: Application to the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Pirot, Guillaume; Linde, Niklas; Mariethoz, Grégoire; Bradford, John H.

    2017-02-01

    Inversion methods that build on multiple-point statistics tools offer the possibility to obtain model realizations that are not only in agreement with field data, but also with conceptual geological models that are represented by training images. A recent inversion approach based on patch-based geostatistical resimulation using graph cuts outperforms state-of-the-art multiple-point statistics methods when applied to synthetic inversion examples featuring continuous and discontinuous property fields. Applications of multiple-point statistics tools to field data are challenging due to inevitable discrepancies between actual subsurface structure and the assumptions made in deriving the training image. We introduce several amendments to the original graph cut inversion algorithm and present a first-ever field application by addressing porosity estimation at the Boise Hydrogeophysical Research Site, Boise, Idaho. We consider both a classical multi-Gaussian and an outcrop-based prior model (training image) that are in agreement with available porosity data. When conditioning to available crosshole ground-penetrating radar data using Markov chain Monte Carlo, we find that the posterior realizations honor overall both the characteristics of the prior models and the geophysical data. The porosity field is inverted jointly with the measurement error and the petrophysical parameters that link dielectric permittivity to porosity. Even though the multi-Gaussian prior model leads to posterior realizations with higher likelihoods, the outcrop-based prior model shows better convergence. In addition, it offers geologically more realistic posterior realizations and it better preserves the full porosity range of the prior.

  3. An efficient approach for inverse kinematics and redundancy resolution scheme of hyper-redundant manipulators

    NASA Astrophysics Data System (ADS)

    Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar

    2018-04-01

    Hyper-redundant manipulators have a larger number of degrees of freedom (DOF) than required to perform a given task. The additional DOF of these manipulators provide the flexibility to work in highly cluttered environments and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated due to the large number of DOF, and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting the best solution out of multiple solutions based on certain criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance and joint torque minimization. This paper focuses on the IK solution and redundancy resolution of hyper-redundant manipulators using a classical optimization approach. Joint positions are computed by optimizing various criteria for a serial hyper-redundant manipulator while traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing criteria such as obstacle avoidance and joint limit avoidance.

  4. Does Talocrural Joint-Thrust Manipulation Improve Outcomes After Inversion Ankle Sprain?

    PubMed

    Krueger, Brett; Becker, Laura; Leemkuil, Greta; Durall, Christopher

    2015-08-01

    Clinical Scenario: Ankle sprains account for roughly 10% of sport-related injuries in the active population. The majority of these injuries occur from excessive ankle inversion, leading to lateral ligamentous injury. In addition to pain and swelling, limitations in ankle range of motion (ROM) and self-reported function are common findings. These limitations are thought to be due in part to loss of mobility in the talocrural joint. Accordingly, some investigators have reported using high-velocity, low-amplitude thrust-manipulation techniques directed at the talocrural joint to address deficits in dorsiflexion (DF) ROM and function. This review was conducted to ascertain the impact of talocrural joint-thrust manipulation (TJM) on DF ROM, self-reported function, and pain in patients with a history of ankle sprain. Focused Clinical Question: In patients with a history of inversion ankle sprain, does TJM improve outcomes in DF ROM, self-reported function, and/or pain?

  5. 2D joint inversion of dc and scalar audio-magnetotelluric data in the evaluation of low enthalpy geothermal fields

    NASA Astrophysics Data System (ADS)

    Monteiro Santos, Fernando A.; Afonso, António R. Andrade; Dupis, André

    2007-03-01

    Audio-magnetotelluric (AMT) and resistivity (dc) surveys are often used in environmental, hydrological and geothermal evaluation. The separate interpretation of those geophysical data sets assuming two-dimensional models frequently produces ambiguous results. The joint inversion of AMT and dc data is advocated by several authors as an efficient method for reducing the ambiguity inherent to each of those methods. This paper presents results obtained from the two-dimensional joint inversion of dipole-dipole and scalar AMT data acquired in a low enthalpy geothermal field situated in a graben. The jointly inverted models show a better definition of shallow and deep structures. The results show that the extent of the benefits of joint inversion depends on the number and spacing of the AMT sites. The models obtained from experimental data display a low resistivity zone (<20 Ω m) in the central part of the graben that was correlated with the geothermal reservoir. The resistivity distribution models were used to estimate the distribution of porosity in the geothermal reservoir by applying two different approaches and considering the effect of clay minerals. The results suggest that the maximum porosity of the reservoir is not uniform and might be in the range of 12% to 24%.
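
    One standard way to map inverted resistivity to porosity is Archie's law; the sketch below shows that conversion under illustrative parameter values, and does not include the clay-mineral correction that the paper also considers.

    import numpy as np

    def archie_porosity(rho, rho_w, a=1.0, m=2.0):
        """Estimate porosity from bulk resistivity via Archie's law
        rho = a * rho_w * phi**(-m)  (fully saturated, clay-free case).

        Illustrative parameter values only; the paper also applies a second
        approach that accounts for the clay-mineral contribution to conductivity.
        """
        return (a * rho_w / rho) ** (1.0 / m)

    # Hypothetical inverted reservoir resistivities (ohm.m) and brine resistivity.
    rho_model = np.array([10.0, 15.0, 20.0])
    print(archie_porosity(rho_model, rho_w=0.5))   # ~0.22, 0.18, 0.16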

  6. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.

  7. A model reduction approach to numerical inversion for a parabolic partial differential equation

    NASA Astrophysics Data System (ADS)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.

  8. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    NASA Astrophysics Data System (ADS)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake and tsunami related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issue of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of the propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and can solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve this problem and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to realize a «no frost» technology, i.e., a steady stream of direct and inverse problems: solving the direct problem, visualizing and comparing with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of an operational emergency workstation tool that could be used by emergency duty personnel in real time.

  9. On the joint inversion of geophysical data for models of the coupled core-mantle system

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1991-01-01

    Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.

  10. Preventive lateral ligament tester (PLLT): a novel method to evaluate mechanical properties of lateral ankle joint ligaments in the intact ankle.

    PubMed

    Best, Raymond; Böhle, Caroline; Mauch, Frieder; Brüggemann, Peter G

    2016-04-01

    To construct and evaluate an ankle arthrometer that registers inversion joint deflection at standardized inversion loads and that, moreover, allows conclusions about the mechanical strain of intact ankle joint ligaments at these loads. Twelve healthy ankles and 12 lower limb cadaver specimens were tested in a self-developed measuring device monitoring passive ankle inversion movement (Inv-ROM) at standardized application of inversion loads of 5, 10 and 15 N. To adjust in vivo and in vitro conditions, muscular inactivity of the evertor muscles was assured by EMG in vivo. Preliminary test-retest and trial-to-trial reliabilities were tested in vivo. To detect lateral ligament strain, the cadaveric calcaneofibular ligament was instrumented with a buckle transducer. After post-test harvesting of the ligament with its bony attachments, the previously obtained resistance strain gauge results were transferred to tensile loads by mounting the specimens with their buckle transducers into a hydraulic material testing machine. ICC reliability considering the Inv-ROM and torsional stiffness varied between 0.80 and 0.90. Inv-ROM ranged from 15.3° (±7.3°) at 5 N to 28.3° (±7.6°) at 15 N. The different tests revealed a CFL tensile load of 31.9 (±14.0) N at 5 N, 51.0 (±15.8) N at 10 N and 75.4 (±21.3) N at 15 N inversion load. A highly reliable arthrometer was constructed that allows not only the accurate detection of passive joint deflections at standardized inversion loads but also objective conclusions about the intact CFL properties in correlation with the individual inversion deflections. The detection of individual joint deflections at predefined loads, combined with knowledge of the tensile ligament loads, could in the future enable more individualized preventive measures, e.g., in high-level athletes.

  11. Accessing the uncertainties of seismic velocity and anisotropy structure of Northern Great Plains using a transdimensional Bayesian approach

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2017-12-01

    Seismic imaging utilizing complementary seismic data provides unique insight into the formation, evolution and current structure of continental lithosphere. While numerous efforts have improved the resolution of seismic structure, the quantification of uncertainties remains challenging due to the non-linearity and the non-uniqueness of the geophysical inverse problem. In this project, we use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate seismic observables including Rayleigh and Love wave dispersion and Ps and Sp receiver functions to invert for shear velocity (Vs), compressional velocity (Vp), density, and radial anisotropy of the lithospheric structure. The Bayesian nature and the transdimensionality of this approach allow the quantification of the model parameter uncertainties while keeping the models parsimonious. Both synthetic tests and inversions of actual data for Ps and Sp receiver functions are performed. We quantify the information gained in different inversions by calculating the Kullback-Leibler divergence. Furthermore, we explore the ability of Rayleigh and Love wave dispersion data to constrain radial anisotropy. We show that when multiple types of model parameters (Vsv, Vsh, and Vp) are inverted simultaneously, the constraints on radial anisotropy are limited by relatively large data uncertainties and trade off strongly with Vp. We then perform a joint inversion of the surface wave dispersion (SWD) and Ps, Sp receiver functions, and show that the constraints on both isotropic Vs and radial anisotropy are significantly improved. To achieve faster convergence of the rjMcMC, we propose a progressive inclusion scheme, and invert SWD measurements and receiver functions from about 400 USArray stations in the Northern Great Plains. We start by only using the SWD data due to its fast convergence rate. We then use the average of the ensemble as a starting model for the joint inversion, which is able to resolve distinct seismic signatures of geological structures including the trans-Hudson orogen, Wyoming craton and Yellowstone hotspot. Various analyses are done to assess the uncertainties of the seismic velocities and Moho depths. We also address the importance of careful data processing of receiver functions by illustrating artifacts due to unmodelled sediment reverberations.

  12. NLSE: Parameter-Based Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
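
    A generic Gauss-Newton nonlinear least-squares loop of the kind NLSE builds on is sketched below; the exponential-decay test problem is illustrative and unrelated to the forward models treated in the book.

    import numpy as np

    def gauss_newton(residual, jacobian, p0, n_iter=20, tol=1e-10):
        """Generic Gauss-Newton nonlinear least-squares estimation loop.

        Minimizes ||r(p)||^2 by solving the normal equations J^T J dp = -J^T r at
        each iterate.  Sketch of the underlying theory only, not the NLSE code.
        """
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r, J = residual(p), jacobian(p)
            dp = np.linalg.solve(J.T @ J, -J.T @ r)
            p = p + dp
            if np.linalg.norm(dp) < tol:
                break
        return p

    # Toy problem: fit y = A * exp(-k * t) to synthetic data (hypothetical model).
    t = np.linspace(0.0, 4.0, 25)
    y = 2.0 * np.exp(-1.3 * t)
    residual = lambda p: p[0] * np.exp(-p[1] * t) - y
    jacobian = lambda p: np.column_stack([np.exp(-p[1] * t),
                                          -p[0] * t * np.exp(-p[1] * t)])
    print(gauss_newton(residual, jacobian, [1.0, 0.5]))   # -> approx [2.0, 1.3]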

  13. Using the in-line component for fixed-wing EM 1D inversion

    NASA Astrophysics Data System (ADS)

    Smiarowski, Adam

    2015-09-01

    Numerous authors have discussed the utility of multicomponent measurements. Generally speaking, for a vertically oriented dipole source, the measured vertical component couples to horizontal planar bodies while the horizontal in-line component couples best to vertical planar targets. For layered-earth cases, helicopter EM systems have little or no in-line component response, and as a result much of the in-line signal is due to receiver coil rotation and appears as noise. In contrast, the in-line component of a fixed-wing airborne electromagnetic (AEM) system with large transmitter-receiver offset can be substantial, exceeding the vertical component in conductive areas. This paper compares the in-line and vertical responses of a fixed-wing AEM system using a half-space model and calculates sensitivity functions. The a posteriori inversion model parameter uncertainty matrix is calculated for a bathymetry model (conductive layer over a more resistive half-space) for two inversion cases: use of the vertical component alone is compared to joint inversion of the vertical and in-line components. The joint inversion is able to better resolve the model parameters. An example is then provided using field data from a bathymetry survey to compare the joint inversion to the vertical-component-only inversion. For each inversion set, the difference between the inverted water depth and ship-measured bathymetry is calculated. The result is in general agreement with that expected from the a posteriori inversion model parameter uncertainty calculation.

  14. Rupture process of the 2009 L'Aquila, central Italy, earthquake, from the separate and joint inversion of Strong Motion, GPS and DInSAR data.

    NASA Astrophysics Data System (ADS)

    Cirella, A.; Piatanesi, A.; Tinti, E.; Chini, M.; Cocco, M.

    2012-04-01

    In this study, we investigate the rupture history of the April 6th 2009 (Mw 6.1) L'Aquila normal faulting earthquake by using a nonlinear inversion of strong motion, GPS and DInSAR data. We use a two-stage non-linear inversion technique. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage the algorithm performs a statistical analysis of the ensemble, providing the best-fitting model, the average model, and the associated standard deviation and coefficient of variation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. The application to the 2009 L'Aquila main-shock shows that both the separate and joint inversion solutions reveal a complex rupture process and a heterogeneous slip distribution. Slip is concentrated in two main asperities: a smaller shallow patch of slip located up-dip from the hypocenter and a second deeper and larger asperity located southeastward along the strike direction. The key feature of the source process emerging from our inverted models concerns the rupture history, which is characterized by two distinct stages. The first stage begins with rupture initiation and a modest moment release lasting nearly 0.9 seconds, which is followed by a sharp increase in slip velocity and rupture speed located 2 km up-dip from the nucleation. During this first stage the rupture front propagated up-dip from the hypocenter at a relatively high (~4.0 km/s), but still sub-shear, rupture velocity. The second stage starts nearly 2 seconds after nucleation and is characterized by along-strike rupture propagation. The larger and deeper asperity fails during this stage of the rupture process. The rupture velocity is larger in the up-dip than in the along-strike direction. The up-dip and along-strike rupture propagation are separated in time and associated with a Mode II and a Mode III crack, respectively. Our results show that the 2009 L'Aquila earthquake featured a very complex rupture, with strong spatial and temporal heterogeneities suggesting a strong frictional and/or structural control of the rupture process.
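
    The second-stage ensemble summary described here (best-fitting model, average model, standard deviation, and coefficient of variation) reduces to simple array operations once the sampled models and their misfits are stored. The sketch below assumes such arrays are already available; it does not reproduce the heat-bath simulated annealing sampler itself.

    ```python
    import numpy as np

    def ensemble_statistics(models, misfits):
        """Summarize an ensemble of sampled rupture models.

        models  : (n_models, n_params) array of sampled parameter vectors
        misfits : (n_models,) array of data misfits, one per model
        """
        best = models[np.argmin(misfits)]          # best-fitting model
        mean = models.mean(axis=0)                 # average model
        std = models.std(axis=0)                   # parameter standard deviation
        cv = np.divide(std, np.abs(mean), out=np.zeros_like(std), where=mean != 0)
        return best, mean, std, cv

    rng = np.random.default_rng(0)
    models = rng.normal(loc=[1.0, 2.0, 0.5], scale=[0.1, 0.3, 0.05], size=(500, 3))
    misfits = rng.random(500)                      # placeholder misfits
    best, mean, std, cv = ensemble_statistics(models, misfits)
    print(mean, std, cv)
    ```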

  15. Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion in the West Coast ShakeAlert System

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Murray, J. R.

    2016-12-01

    Finite-fault source algorithms can greatly benefit earthquake early warning (EEW) systems. Estimates of finite-fault parameters provide spatial information, which can significantly improve real-time shaking calculations and help with disaster response. In this project, we have focused on integrating a finite-fault seismic-geodetic algorithm into the West Coast ShakeAlert framework. The seismic part is FinDer 2, a C++ version of the algorithm developed by Böse et al. (2012). It interpolates peak ground accelerations and calculates the best fault length and strike from template matching. The geodetic part is a C++ version of BEFORES, the algorithm developed by Minson et al. (2014) that uses a Bayesian methodology to search for the most probable slip distribution on a fault of unknown orientation. Ultimately, the two will be used together, with FinDer generating a Bayesian prior for BEFORES via the methodology of Minson et al. (2015), and the joint solution will generate estimates of finite-fault extent, strike, dip, best slip distribution, and magnitude. We have created C++ versions of both FinDer and BEFORES using open source libraries and have developed a C++ Application Programming Interface (API) for each. Their APIs allow FinDer and BEFORES to contribute to the ShakeAlert system via an open source messaging system, ActiveMQ. FinDer has been receiving real-time data, detecting earthquakes, and reporting messages on the development system for several months. We are also testing FinDer extensively with Earthworm tankplayer files. BEFORES has been tested with ActiveMQ messaging in the ShakeAlert framework, and works off a FinDer trigger. We are finishing the FinDer-BEFORES connections in this framework and testing the system via seismic-geodetic tankplayer files. This will include actual and simulated data.

  16. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively resolve several key issues, including the choice of the weighting coefficients, the inversion range, and the optimal inversion method from two candidate regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
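
    One nonnegative Tikhonov step of the kind embedded in such a scheme can be written as a non-negative least-squares solve of an augmented system, as sketched below. The kernel matrix is a placeholder, and the wavelet multiscale recursion and weighting-coefficient selection of the WIRNNT-PT algorithm are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def tikhonov_nonneg(A, b, lam):
        """Solve min ||A x - b||^2 + lam * ||x||^2 subject to x >= 0."""
        n = A.shape[1]
        A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])   # augmented system encodes the penalty
        b_aug = np.concatenate([b, np.zeros(n)])
        x, _ = nnls(A_aug, b_aug)
        return x

    # Placeholder kernel mapping a discretized PSD to autocorrelation-like data
    rng = np.random.default_rng(0)
    A = np.exp(-np.outer(np.linspace(0.1, 2.0, 40), np.linspace(1, 10, 25)))
    x_true = np.exp(-0.5 * ((np.arange(25) - 12) / 3.0) ** 2)   # smooth unimodal PSD
    b = A @ x_true + 1e-4 * rng.standard_normal(40)
    x_est = tikhonov_nonneg(A, b, lam=1e-3)
    ```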

  17. Optimization of CO2 Surface Flux using GOSAT Total Column CO2: First Results for 2009-2010

    NASA Astrophysics Data System (ADS)

    Basu, S.; Houweling, S.

    2011-12-01

    Constraining surface flux estimates of CO2 using satellite measurements has been one of the long-standing goals of the atmospheric inverse modeling community. We present the first results of inverting GOSAT total column CO2 measurements for obtaining global monthly CO2 flux maps over one year (June 2009 to May 2010). We use the SRON RemoTeC retrieval of CO2 for our inversions. The SRON retrieval has been shown to have no bias when compared to TCCON total column measurements, and latitudinal gradients of the retrieved CO2 are consistent with gradients deduced from the surface flask network [Butz et al, 2011]. This makes this retrieval an ideal candidate for atmospheric inversions, which are highly sensitive to spurious gradients. Our inversion system is analogous to the CarbonTracker (CT) data assimilation system; it is initialized with the prior CO2 fluxes of CT, and uses the same atmospheric transport model, i.e., TM5. The two major differences are (a) we add GOSAT CO2 data to the inversion in addition to flask data, and (b) we use a 4DVAR optimization system instead of a Kalman filter. We compare inversions using (a) only GOSAT total column CO2 measurements, (b) only surface flask CO2 measurements, and (c) the joint data set of GOSAT and surface flask measurements. We validate GOSAT-only inversions against the NOAA surface flask network and joint inversions against CONTRAIL and other aircraft campaigns. We see that inverted fluxes from a GOSAT-only inversion are consistent with fluxes from a stations-only inversion, reaffirming the low biases in SRON retrievals. From the joint inversion, we estimate the amount of added constraints upon adding GOSAT total column measurements to existing surface layer measurements.
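
    The 4DVAR optimization mentioned here minimizes a cost function of the general form J(x) = 0.5 (x - xb)^T B^-1 (x - xb) + 0.5 (y - Hx)^T R^-1 (y - Hx). The sketch below evaluates this cost and its gradient for a small linear observation operator; it is a schematic stand-in and not the TM5-based system described in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fourdvar_cost_and_grad(x, x_b, B_inv, y, H, R_inv):
        """Cost and gradient for a linear-observation 4DVAR-style problem."""
        dx = x - x_b                     # departure from prior fluxes
        dy = y - H @ x                   # observation-minus-model residual
        cost = 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy
        grad = B_inv @ dx - H.T @ R_inv @ dy
        return cost, grad

    # Tiny illustrative problem: 4 flux unknowns, 6 observations
    rng = np.random.default_rng(0)
    x_b = np.zeros(4)
    B_inv = np.eye(4) / 0.5**2
    H = rng.random((6, 4))
    R_inv = np.eye(6) / 0.1**2
    y = H @ np.array([0.3, -0.2, 0.1, 0.4]) + 0.05 * rng.standard_normal(6)

    res = minimize(fourdvar_cost_and_grad, x_b, args=(x_b, B_inv, y, H, R_inv),
                   jac=True, method="L-BFGS-B")
    print(res.x)
    ```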

  18. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method was proposed based on the dependent model algorithm. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which minimized the number of wavelengths required and increased the flexibility of light-source selection. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  19. Determination of elastic moduli from measured acoustic velocities.

    PubMed

    Brown, J Michael

    2018-06-01

    Methods are evaluated in solution of the inverse problem associated with determination of elastic moduli for crystals of arbitrary symmetry from elastic wave velocities measured in many crystallographic directions. A package of MATLAB functions provides a robust and flexible environment for analysis of ultrasonic, Brillouin, or Impulsive Stimulated Light Scattering datasets. Three inverse algorithms are considered: the gradient-based methods of Levenberg-Marquardt and Backus-Gilbert, and a non-gradient-based (Nelder-Mead) simplex approach. Several data types are considered: body wave velocities alone, surface wave velocities plus a side constraint on X-ray-diffraction-based axes compressibilities, or joint body and surface wave velocities. The numerical algorithms are validated through comparisons with prior published results and through analysis of synthetic datasets. Although all approaches succeed in finding low-misfit solutions, the Levenberg-Marquardt method consistently demonstrates effectiveness and computational efficiency. However, linearized gradient-based methods, when applied to a strongly non-linear problem, may not adequately converge to the global minimum. The simplex method, while slower, is less susceptible to being trapped in local misfit minima. A "multi-start" strategy (initiate searches from more than one initial guess) provides better assurance that global minima have been located. Numerical estimates of parameter uncertainties based on Monte Carlo simulations are compared to formal uncertainties based on covariance calculations. Copyright © 2018 Elsevier B.V. All rights reserved.
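
    The multi-start strategy described here can be sketched by launching SciPy's Nelder-Mead minimizer from several random starting points and keeping the lowest-misfit result. The misfit function below is an artificial multimodal stand-in, not the velocity misfit of the actual MATLAB package.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def multistart_nelder_mead(misfit, bounds, n_starts=20, seed=0):
        """Run Nelder-Mead from several random starting points and keep the best solution."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        best = None
        for _ in range(n_starts):
            x0 = rng.uniform(lo, hi)                       # random initial guess within bounds
            res = minimize(misfit, x0, method="Nelder-Mead",
                           options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
            if best is None or res.fun < best.fun:
                best = res
        return best

    # Stand-in multimodal misfit with several local minima
    misfit = lambda c: np.sum((c - 1.0) ** 2) + 2.0 * np.sum(np.sin(5.0 * c) ** 2)
    result = multistart_nelder_mead(misfit, bounds=[(-2, 3), (-2, 3)])
    print(result.x, result.fun)
    ```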

  20. Imaging Crustal Structure with Waveform and HV Ratio of Body-wave Receiver Function

    NASA Astrophysics Data System (ADS)

    Chong, J.; Chu, R.; Ni, S.; Meng, Q.; Guo, A.

    2017-12-01

    It is known that the receiver function places weak constraints on absolute velocity, and joint inversion of receiver functions and surface wave dispersion has been widely applied to reduce the non-uniqueness of velocity and interface depth. However, some studies indicate that the receiver function itself is capable of determining the absolute shear wave velocity. In this study, we propose to measure the receiver function HV ratio, which takes advantage of the amplitude information of the radial and vertical receiver functions to constrain the shear-wave velocity. Numerical analysis indicates that the receiver function HV ratio is sensitive to the average shear wave velocity in the depth range it samples, and can help to reduce the non-uniqueness of receiver function waveform inversion. A joint inversion scheme has been developed, and both synthetic tests and real data applications demonstrate the feasibility of the joint inversion. The method has been applied to the dense seismic array of the ChinArray program in SE Tibet during the period from August 2011 to August 2012 (ChinArray-Himalaya, 2011). The measurements of the receiver function HV ratio reveal the lateral variation of the tectonics in the study region, and the main features of the velocity structure imaged by the new joint inversion method are consistent with previous studies. KEYWORDS: receiver function HV ratio, receiver function waveform inversion, crustal structure. References: ChinArray-Himalaya, 2011. China Seismic Array waveform data of Himalaya Project. Institute of Geophysics, China Earthquake Administration. doi:10.12001/ChinArray.Data.Himalaya. Jiajun Chong, Risheng Chu, Sidao Ni, Qingjun Meng, Aizhi Guo, 2017. Receiver Function HV Ratio, a New Measurement for Reducing Non-uniqueness of Receiver Function Waveform Inversion (under revision).

  1. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-coding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
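
    The restart logic can be outlined as an outer loop over segments: each segment draws a new random shot encoding, runs a limited number of L-BFGS iterations on the corresponding super-shot misfit, and hands the model to the next segment with the L-BFGS memory discarded. The linear per-shot forward operators below are placeholders; a real FWI code would substitute wave-equation modelling and an adjoint-state gradient.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def encoded_misfit(m, codes, G_shots, d_shots):
        """Placeholder encoded misfit: per-shot (linear) predictions are summed with +/-1 codes."""
        G_super = np.tensordot(codes, G_shots, axes=1)   # (n_data, n_model) encoded operator
        d_super = codes @ d_shots                        # (n_data,) encoded observed super-shot
        r = G_super @ m - d_super
        return 0.5 * r @ r, G_super.T @ r                # misfit value and gradient

    def restarted_lbfgs(m0, G_shots, d_shots, n_segments=5, iters_per_segment=10, seed=0):
        rng = np.random.default_rng(seed)
        m = np.asarray(m0, dtype=float)
        n_shots = d_shots.shape[0]
        for _ in range(n_segments):
            codes = rng.choice([-1.0, 1.0], size=n_shots)        # fresh random encoding per segment
            res = minimize(encoded_misfit, m, args=(codes, G_shots, d_shots), jac=True,
                           method="L-BFGS-B", options={"maxiter": iters_per_segment})
            m = res.x                                            # L-BFGS memory is discarded at restart
        return m

    rng = np.random.default_rng(1)
    n_shots, n_data, n_model = 4, 8, 6
    G_shots = rng.random((n_shots, n_data, n_model))   # stand-in linear forward operator per shot
    m_true = rng.random(n_model)
    d_shots = np.einsum("sdm,m->sd", G_shots, m_true)
    print(restarted_lbfgs(np.zeros(n_model), G_shots, d_shots))
    ```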

  2. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    DTIC Science & Technology

    2017-01-05

    Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer. Yu-Ren Chien, Daryush... D. Mehta, Member, IEEE, Jón Guðnason, Matías Zañartu, Member, IEEE, and Thomas F. Quatieri, Fellow, IEEE. Abstract: Glottal inverse filtering aims to... of inverse filtering performance has been challenging due to the practical difficulty in measuring the true glottal signals while speech signals are

  3. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

    Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l₀-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order cumulants based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l₀-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than the l₁-norm minimization based methods, such as l₁-SVD (singular value decomposition), RV (real-valued) l₁-SVD and RV l₁-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
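
    The smoothed l₀ idea can be illustrated with the basic single-measurement-vector recursion: a Gaussian surrogate of the l₀ norm is optimized by gradient steps while projecting back onto the data-consistency set, with the smoothing width gradually decreased. This sketch assumes a generic linear model y = A x and does not include the cumulant-based data matrix or the joint MMV extension proposed in the paper.

    ```python
    import numpy as np

    def smoothed_l0(A, y, sigma_min=1e-3, sigma_decrease=0.6, mu=2.0, inner_iters=3):
        """Basic smoothed-l0 recovery of a sparse x from y = A x (single measurement vector)."""
        A_pinv = np.linalg.pinv(A)
        x = A_pinv @ y                            # minimum-norm feasible starting point
        sigma = 2.0 * np.max(np.abs(x))
        while sigma > sigma_min:
            for _ in range(inner_iters):
                delta = x * np.exp(-np.abs(x) ** 2 / (2.0 * sigma ** 2))
                x = x - mu * delta                # gradient step on the smoothed l0 surrogate
                x = x - A_pinv @ (A @ x - y)      # project back onto the solution set of A x = y
            sigma *= sigma_decrease
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 60))
    x_true = np.zeros(60); x_true[[5, 17, 42]] = [1.0, -0.8, 0.5]
    y = A @ x_true
    x_hat = smoothed_l0(A, y)
    print(np.argsort(np.abs(x_hat))[-3:])         # indices of the three largest recovered entries
    ```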

  4. A quantitative comparison of soil moisture inversion algorithms

    NASA Technical Reports Server (NTRS)

    Zyl, J. J. van; Kim, Y.

    2001-01-01

    This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.

  5. Shear wave velocity model beneath CBJI station West Java, Indonesia from joint inversion of teleseismic receiver functions and surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Simanungkalit, R. H.; Anggono, T.; Syuhada; Amran, A.; Supriyanto

    2018-03-01

    Earthquake signal observations around the world allow seismologists to obtain information on the internal structure of the Earth, especially the Earth's crust. In this study, we used joint inversion of receiver functions and surface wave group velocities to investigate the crustal structure beneath the CBJI station in West Java, Indonesia. Receiver functions were calculated from earthquakes with magnitudes greater than 5 at distances of 30°-90°. Surface wave group velocities were calculated using frequency-time analysis of earthquakes at distances of 30°-40°. We obtained a shear wave velocity model beneath the station by jointly inverting the receiver functions and surface wave dispersion. We suggest that the crustal thickness beneath the CBJI station, West Java, Indonesia is about 35 km.

  6. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
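
    A minimal real-coded genetic algorithm of the kind referred to here is sketched below, with tournament selection, blend crossover, Gaussian mutation, and elitism. The angular-scattering forward model is a purely illustrative two-parameter placeholder; a real implementation would use Mie theory.

    ```python
    import numpy as np

    # Placeholder forward model: scattered intensity vs angle for a two-parameter PSD.
    angles = np.linspace(10, 170, 40)
    def forward(params):
        mean_d, width = params
        return mean_d * np.exp(-((angles - 90.0) / (20.0 * width)) ** 2)

    def ga_invert(d_obs, bounds, pop_size=60, n_gen=100, mut_sigma=0.05, seed=0):
        """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
        for _ in range(n_gen):
            fitness = np.array([np.sum((forward(p) - d_obs) ** 2) for p in pop])
            new_pop = [pop[np.argmin(fitness)]]                  # elitism: keep best individual
            while len(new_pop) < pop_size:
                i, j = rng.integers(pop_size, size=2)
                a = pop[i] if fitness[i] < fitness[j] else pop[j]   # tournament parent 1
                i, j = rng.integers(pop_size, size=2)
                b = pop[i] if fitness[i] < fitness[j] else pop[j]   # tournament parent 2
                w = rng.random(len(bounds))
                child = w * a + (1 - w) * b                         # blend crossover
                child += mut_sigma * (hi - lo) * rng.standard_normal(len(bounds))
                new_pop.append(np.clip(child, lo, hi))
            pop = np.array(new_pop)
        fitness = np.array([np.sum((forward(p) - d_obs) ** 2) for p in pop])
        return pop[np.argmin(fitness)]

    d_obs = forward([1.2, 0.8]) + 0.01 * np.random.default_rng(1).standard_normal(angles.size)
    print(ga_invert(d_obs, bounds=[(0.1, 3.0), (0.2, 2.0)]))
    ```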

  7. Joint inversion of 3-D seismic, gravimetric and magnetotelluric data for sub-basalt imaging in the Faroe-Shetland Basin

    NASA Astrophysics Data System (ADS)

    Heincke, B.; Moorkamp, M.; Jegen, M.; Hobbs, R. W.

    2012-12-01

    Imaging of sub-basalt sediments with reflection seismic techniques is limited due to absorption, scattering and transmission effects and the presence of peg-leg multiples. Although many of the difficulties facing conventional seismic profiling can be overcome by recording long-offset data, the resolution of sub-basalt sediments in seismic sections typically remains largely restricted. Therefore, multi-parametric approaches in general and joint inversion strategies in particular (e.g. Colombo et al., 2008; Jordan et al., 2012) are considered as an alternative to gain additional information from sub-basalt structures. Here, we combine first-arrival time tomography, FTG gravity and MT data in a 3-D joint inversion to identify the base basalt and resolve potential sediments underneath. For sub-basalt exploration the three methods complement each other such that the null space is reduced and significantly better resolved models can be obtained than would be possible by the individual methods: the seismic data give a robust model for the supra-basalt sediments, whilst the gravity field is dominated by the high density basalt and basement features. The MT, on the other hand, is sensitive to the conductivity in both the supra- and sub-basalt sediments. We will present preliminary individual and joint inversion results for a FTG, seismic and MT data set located in the Faroe-Shetland basin. Because the investigated area is rather large (~75 x 40 km) and the individual data sets are large, we use a joint inversion framework (see Moorkamp et al., 2011) which is designed to handle large amounts of data and model parameters. The framework also offers options to link the individual parameter models either petrophysically, using fixed parameter relationships, or structurally, using the cross-gradient approach. The seismic data set consists of a pattern of 8 intersecting wide-angle seismic profiles with maximum offsets of up to ~24 km. The 3-D gravity data set (size: ~30 x 30 km) was collected along parallel lines by a shipborne gradiometer, and the marine MT data set is composed of 41 stations that are distributed over the whole investigation area. Logging results from a borehole located in the central part of the investigation area enable us to derive parameter relationships between seismic velocities, resistivities and densities that adequately describe the rock property behaviour of both the basaltic lava flows and sedimentary layers in this region. In addition, a 3-D reflection seismic survey covering the central part allows us to incorporate the top of basalt and other features as constraints in the joint inversions and to evaluate the quality of the final results. Literature: D. Colombo, M. Mantovani, S. Hallinan, M. Virgilio, 2008. Sub-basalt depth imaging using simultaneous joint inversion of seismic and electromagnetic (MT) data: a CRB field study. SEG Expanded Abstracts, Las Vegas, USA, 2674-2678. M. Jordan, J. Ebbing, M. Brönner, J. Kamm, Z. Du, P. Eliasson, 2012. Joint Inversion for Improved Sub-salt and Sub-basalt Imaging with Application to the Møre Margin. EAGE Expanded Abstracts, Copenhagen, DK. M. Moorkamp, B. Heincke, M. Jegen, A. W. Roberts, R. W. Hobbs, 2011. A framework for 3-D joint inversion of MT, gravity and seismic refraction data. Geophysical Journal International, 184, 477-493.
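
    The structural (cross-gradient) coupling option mentioned above can be illustrated on a 2-D grid: the cross-gradient function vanishes wherever the gradients of the two property models are parallel. The synthetic velocity and resistivity models below are assumptions for illustration only.

    ```python
    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        """Cross-gradient t = grad(m1) x grad(m2) on a 2-D (x-z) grid.

        In 2-D the cross product has a single out-of-plane component; it is zero
        wherever the two model gradients are parallel, i.e. where structures coincide.
        """
        dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
        dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
        return dm1_dx * dm2_dz - dm1_dz * dm2_dx

    # Two models sharing the same dipping interface (structurally consistent)
    z, x = np.mgrid[0:50, 0:80]
    s = z - (20.0 + 0.2 * x)                        # signed distance to the interface
    velocity = 3.5 + 1.0 * np.tanh(s / 3.0)         # km/s
    log_resistivity = 1.0 + 1.5 * np.tanh(s / 3.0)  # log10(ohm-m)
    t = cross_gradient(velocity, log_resistivity)
    print(np.abs(t).max())                          # ~0: the two models share one structure
    ```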

  8. Crustal Stress and Strain Distribution in Sicily (Southern Italy) from Joint Analysis of Seismicity and Geodetic Data

    NASA Astrophysics Data System (ADS)

    Presti, D.; Neri, G.; Aloisi, M.; Cannavo, F.; Orecchio, B.; Palano, M.; Siligato, G.; Totaro, C.

    2014-12-01

    An updated database of earthquake focal mechanisms is compiled for the Sicilian region (southern Italy) and surrounding off-shore areas where the Nubia-Eurasia convergence coexists with the very-slow residual rollback of the Ionian subducting slab. High-quality solutions selected from literature and catalogs have been integrated with new solutions estimated in the present work using the Cut And Paste (CAP) waveform inversion method. In the CAP algorithm (Zhao and Helmberger, 1994; Zhu and Helmberger, 1996), each waveform is broken up into Pnl and surface wave segments, which are weighted differently during the inversion procedure. Integration of the new solutions with the ones selected from literature and official catalogs led us to collect a database consisting exclusively of waveform inversion data relative to earthquakes with minimum magnitude 2.6. The seismicity and focal mechanism distributions have been compared with crustal motion and strain data coming from GNSS analyses. For this purpose GNSS-based observations collected over the investigated area by episodic measurements (1994-2013) as well as continuous monitoring (since 2006) were processed by the GAMIT/GLOBK software packages (Herring et al., 2010) following the approach described in Palano et al. (2011). To adequately investigate the crustal deformation pattern, the estimated GNSS velocities were aligned to a fixed Eurasian reference frame. The good agreement found between seismic and geodetic information contributes to better define seismotectonic domains characterized by different kinematics. Moving from the available geophysical information and from an early application of FEM algorithms, we have also started to investigate stress/strain fields in the crust of the study area including depth dependence and relationships with rupture of the main seismogenic structures.

  9. Joint inversion of acoustic and resistivity data for the estimation of gas hydrate concentration

    USGS Publications Warehouse

    Lee, Myung W.

    2002-01-01

    Downhole log measurements, such as acoustic or electrical resistivity logs, are frequently used to estimate in situ gas hydrate concentrations in the pore space of sedimentary rocks. Usually the gas hydrate concentration is estimated separately from each log measurement. However, the measurements are related to each other through the gas hydrate concentration, so the gas hydrate concentration can be estimated by jointly inverting the available logs. Because the magnitudes of the acoustic slowness and resistivity values differ by more than an order of magnitude, a least-squares method weighted by the inverse of the observed values is used. Estimating the resistivity of connate water and the gas hydrate concentration simultaneously is problematic, because the resistivity of connate water is independent of acoustics. In order to overcome this problem, a coupling constant is introduced in the Jacobian matrix. When different logs are used to estimate gas hydrate concentration, a joint inversion of the measurements is preferable to averaging the individual inversion results.

  10. Amplitude inversion of the 2D analytic signal of magnetic anomalies through the differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Özyalın, Şenol; Sındırgı, Petek; Balkaya, Çağlayan; Göktürkler, Gökhan

    2017-12-01

    In this work, analytic signal amplitude (ASA) inversion of total field magnetic anomalies has been achieved by differential evolution (DE), which is a population-based evolutionary metaheuristic algorithm. Using an elitist strategy, the applicability and effectiveness of the proposed inversion algorithm have been evaluated through anomalies due to both hypothetical model bodies and real isolated geological structures. Some parameter-tuning studies, relying mainly on choosing the optimum control parameters of the algorithm, have also been performed to enhance the performance of the proposed metaheuristic. Since ASAs of magnetic anomalies are independent of both the ambient field direction and the direction of magnetization of the causative sources in the two-dimensional (2D) case, inversions of synthetic noise-free and noisy single-model anomalies have produced satisfactory solutions showing the practical applicability of the algorithm. Moreover, hypothetical studies using multiple model bodies have clearly shown that the DE algorithm is able to cope with complicated anomalies and some interference from neighbouring sources. The proposed algorithm has then been used to invert small-scale (120 m) and large-scale (40 km) magnetic profile anomalies of an iron deposit (Kesikköprü-Bala, Turkey) and a deep-seated magnetized structure (Sea of Marmara, Turkey), respectively, to determine the depths, geometries and exact origins of the source bodies. Inversion studies have yielded geologically reasonable solutions which are also in good agreement with the results of normalized full gradient and Euler deconvolution techniques. Thus, we propose the use of DE not only for the amplitude inversion of 2D analytic signals of magnetic profile anomalies having induced or remanent magnetization effects but also for other low-dimensional data inversions in geophysics. A part of this paper was presented as an abstract at the 2nd International Conference on Civil and Environmental Engineering, 8-10 May 2017, Cappadocia-Nevşehir (Turkey).
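
    A hedged sketch of a DE-based amplitude inversion is given below using SciPy's differential_evolution. The bell-shaped ASA forward expression and the model parameters (amplitude factor, horizontal position, depth) are simplified placeholders rather than the exact formulation used by the authors.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Placeholder forward model for the analytic signal amplitude along a profile,
    # parameterized by (amplitude factor k, horizontal position x0, depth z).
    x_obs = np.linspace(-500.0, 500.0, 101)
    def asa_forward(params):
        k, x0, z = params
        return k / ((x_obs - x0) ** 2 + z ** 2)

    def misfit(params, d_obs):
        return np.sum((asa_forward(params) - d_obs) ** 2)

    d_obs = asa_forward([5.0e6, 40.0, 80.0]) \
            + 1e-3 * np.random.default_rng(0).standard_normal(x_obs.size)
    result = differential_evolution(misfit, bounds=[(1e5, 1e8), (-300, 300), (10, 300)],
                                    args=(d_obs,), seed=1)
    print(result.x)    # recovered (k, x0, z)
    ```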

  11. The Crust and Upper Mantle Structure of the Iranian Plateau from Joint Waveform Tomography Imaging of Body and Surface Waves

    NASA Astrophysics Data System (ADS)

    Roecker, S. W.; Priestley, K. F.; Tatar, M.

    2014-12-01

    The Iranian Plateau forms a broad zone of deformation between the colliding Arabian and Eurasian plates. The convergence is accommodated in the Zagros Mountains of SW Iran, the Alborz Mountains of northern Iran, and the Kopeh Dagh Mountains of NE Iran. These deforming belts are separated by relatively aseismic depressions such as the Lut Block. It has been suggested that the Arabia-Eurasia collision is similar to the Indo-Eurasia collision but at an earlier point of development, and it may therefore provide clues to our understanding of the earlier stages of the continent-continent collision process. We present results of the analysis of seismic data collected along two NE-SW trending transects across the Iranian Plateau. The first profile extends from near Bushere on the Persian Gulf coast to near the Iran-Turkmenistan border north of Mashad, and consists of seismic recordings along the SW portion of the line in 2000-2001 and recordings along the NE portion of the line in 2003 and 2006-2008. The second profile extends from near the Iran-Iraq border near the Dezfel embayment to the south Caspian Sea coast north of Tehran. We apply the combined 2.5D finite element waveform tomography algorithm of Baker and Roecker [2014] to jointly invert teleseismic body and surface waves to determine the elastic wavespeed structures of these areas. The joint inversion of these different types of waves affords advantages similar to those of combined surface wave dispersion/receiver function inversions, compensating for intrinsic weaknesses in horizontal and vertical resolution capabilities. We compare results recovered from a finite difference approach to document the effects of various assumptions related to their application, such as the inclusion of topography, on the models recovered. We also apply several different inverse methods, ranging from simple gradient techniques to the more sophisticated pseudo-Hessian and L-BFGS approaches, and find that the latter are generally more robust. Modeling of receiver functions and surface wave dispersion prior to the analysis is shown to be an efficacious way to generate starting models for this analysis.

  12. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Xing, H. L.

    2016-12-01

    Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative, and their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. Therefore it is imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess a very high convergence speed and a good capacity to overcome local minima, and have been applied successfully in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very few publications address this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require approximation of the Green's function. The method directly interacts with a CPU-parallelized finite difference forward modelling engine, updating the model parameters according to GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied successfully to WMI and has unique advantages. Keywords: micro-seismicity, waveform matching inversion, gravitational search algorithm, parallel computation

  13. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.

  14. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms out-perform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  15. Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds

    NASA Technical Reports Server (NTRS)

    Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.

    2001-01-01

    Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement means, satellite remote sensing seems to be the most promising. Since most of the cloud algorithms proposed so far are for daytime use and rely on solar radiation, Minnis et al. (1998) developed a nighttime algorithm using the 3.7-, 11- and 12-micron channels. Their algorithm, however, has the drawback that it is not able to treat temperature inversion cases. We update their algorithm, incorporating a new parameterization by Arduini et al. (1999) that is valid for temperature inversion cases. The updated algorithm has been applied to GOES satellite data and reasonable retrieval results were obtained.

  16. Lateral ligament repair and reconstruction restore neither contact mechanics of the ankle joint nor motion patterns of the hindfoot.

    PubMed

    Prisk, Victor R; Imhauser, Carl W; O'Loughlin, Padhraig F; Kennedy, John G

    2010-10-20

    Ankle sprains may damage both the lateral ligaments of the hindfoot and the osteochondral tissue of the ankle joint. When nonoperative treatment fails, operative approaches are indicated to restore both native motion patterns at the hindfoot and ankle joint contact mechanics. The goal of the present study was to determine the effect of lateral ligament injury, repair, and reconstruction on ankle joint contact mechanics and hindfoot motion patterns. Eight cadaveric specimens were tested with use of robotic technology to apply combined compressive (200-N) and inversion (4.5-Nm) loads to the hindfoot at 0° and 20° of plantar flexion. Contact mechanics at the ankle joint were simultaneously measured. A repeated-measures experiment was designed with use of the intact condition as control, with the other conditions including sectioned anterior talofibular and calcaneofibular ligaments, the Broström and Broström-Gould repairs, and graft reconstruction. Ligament sectioning decreased contact area and caused a medial and anterior shift in the center of pressure with inversion loads relative to those with the intact condition. There were no significant differences in inversion or coupled axial rotation with inversion between the Broström repair and the intact condition; however, medial translation of the center of pressure remained elevated after the Broström repair relative to the intact condition. The Gould modification of the Broström procedure provided additional support to the hindfoot relative to the Broström repair, reducing inversion and axial rotation with inversion beyond that of intact ligaments. There were no significant differences in center-of-pressure excursion patterns between the Broström-Gould repair and the intact ligament condition, but this repair increased contact area beyond that with the ligaments intact. Graft reconstruction more closely restored inversion motion than did the Broström-Gould repair at 20° of plantar flexion but limited coupled axial rotation. Graft reconstruction also increased contact areas beyond the lateral ligament-deficient conditions but altered center-of-pressure excursion patterns relative to the intact condition. No lateral ankle ligament reconstruction completely restored native contact mechanics of the ankle joint and hindfoot motion patterns.

  17. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
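
    The basic (unscaled) Newton iteration for the unitary polar factor of a square nonsingular matrix is sketched below; the acceleration scaling, the preliminary complete orthogonal decomposition for rectangular matrices, and the switch to the matrix-multiplication-rich iteration discussed in the paper are omitted.

    ```python
    import numpy as np

    def polar_newton(A, tol=1e-12, max_iter=100):
        """Unitary polar factor of a square nonsingular A via X_{k+1} = 0.5*(X_k + X_k^{-H})."""
        X = A.astype(complex)
        for _ in range(max_iter):
            X_next = 0.5 * (X + np.linalg.inv(X).conj().T)
            if np.linalg.norm(X_next - X, "fro") <= tol * np.linalg.norm(X_next, "fro"):
                X = X_next
                break
            X = X_next
        U = X
        H = U.conj().T @ A            # Hermitian positive (semi)definite factor, A = U H
        return U, H

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    U, H = polar_newton(A)
    print(np.allclose(U @ H, A), np.allclose(U.conj().T @ U, np.eye(5)))
    ```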

  18. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency for a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials rather than a single computation, owing to the randomness of the genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.

  19. Correlations among pelvic positions and differences in lower extremity joint angles during walking in female university students.

    PubMed

    Cho, Misuk

    2015-06-01

    [Purpose] This study aimed to identify correlations among pelvic positions and differences in lower extremity joint angles during walking in female university students. [Subjects] Thirty female university students were enrolled and their pelvic positions and differences in lower extremity joint angles were measured. [Methods] Pelvic position, pelvic torsion, and pelvic rotation were assessed using the BackMapper. In addition, motion analysis was performed to derive differences between left and right flexion, abduction, and external rotation ranges of hip joints; flexion, abduction, and external rotation ranges of knee joints; and dorsiflexion, inversion, and abduction ranges of ankle joints, according to X, Y, and Z-axes. [Results] Pelvic position was found to be positively correlated with differences between left and right hip flexion (r=0.51), hip abduction (r=0.62), knee flexion (r=0.45), knee abduction (r=0.42), and ankle inversion (r=0.38). In addition, the difference between left and right hip abduction showed a positive correlation with difference between left and right ankle dorsiflexion (r=0.64). Moreover, differences between left and right knee flexion exhibited positive correlations with differences between left and right knee abduction (r=0.41) and ankle inversion (r=0.45). [Conclusion] Bilateral pelvic tilt angles are important as they lead to bilateral differences in lower extremity joint angles during walking.

  20. Validation Studies of the Accuracy of Various SO2 Gas Retrievals in the Thermal InfraRed (8-14 μm)

    NASA Astrophysics Data System (ADS)

    Gabrieli, A.; Wright, R.; Lucey, P. G.; Porter, J. N.; Honniball, C.; Garbeil, H.; Wood, M.

    2016-12-01

    Quantifying hazardous SO2 in the atmosphere and in volcanic plumes is important for public health and volcanic eruption prediction. Remote sensing measurements of spectral radiance of plumes contain information on the abundance of SO2. However, in order to convert such measurements into SO2 path-concentrations, reliable inversion algorithms are needed. Various techniques can be employed to derive SO2 path-concentrations. The first approach employs a Partial Least Square Regression model trained using MODTRAN5 simulations for a variety of plume and atmospheric conditions. Radiances at many spectral wavelengths (8-14 μm) were used in the algorithm. The second algorithm uses measurements inside and outside the SO2 plume. Measurements in the plume-free region (background sky) make it possible to remove background atmospheric conditions and any instrumental effects. After atmospheric and instrumental effects are removed, MODTRAN5 is used to fit the SO2 spectral feature and obtain SO2 path-concentrations. The two inversion algorithms described above can be compared with the inversion algorithm for SO2 retrievals developed by Prata and Bernardo (2014). Their approach employs three wavelengths to characterize the plume temperature, the atmospheric background, and the SO2 path-concentration. The accuracy of these various techniques requires further investigation in terms of the effects of different atmospheric background conditions. Validating these inversion algorithms is challenging because ground truth measurements are very difficult. However, if the three separate inversion algorithms provide similar SO2 path-concentrations for actual measurements with various background conditions, then this increases confidence in the results. Measurements of sky radiance when looking through SO2 filled gas cells were collected with a Thermal Hyperspectral Imager (THI) under various atmospheric background conditions. These data were processed using the three inversion approaches, which were tested for convergence on the known SO2 gas cell path-concentrations. For this study, the inversion algorithms were modified to account for the gas cell configuration. Results from these studies will be presented, as well as results from SO2 gas plume measurements at Kīlauea volcano, Hawai'i.
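
    The first approach mentioned above, a regression model trained on simulated spectra, can be sketched with scikit-learn's PLSRegression. The training radiances and the SO2 feature shape below are synthetic placeholders standing in for MODTRAN5 simulations.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical training set: simulated spectral radiances (8-14 micron channels)
    # and the SO2 path-concentrations used to generate them.
    rng = np.random.default_rng(0)
    n_samples, n_channels = 500, 120
    so2_path = rng.uniform(0.0, 2000.0, n_samples)                      # e.g. ppm-m
    base_spectra = rng.normal(8.0, 0.5, (n_samples, n_channels))        # background radiance
    absorption = np.exp(-((np.arange(n_channels) - 30) / 10.0) ** 2)    # stand-in SO2 feature shape
    radiances = base_spectra - 1e-3 * np.outer(so2_path, absorption)

    pls = PLSRegression(n_components=8)
    pls.fit(radiances, so2_path)

    # Retrieval on a new (simulated) measurement with a known path-concentration of 750
    test_radiance = (rng.normal(8.0, 0.5, n_channels) - 1e-3 * 750.0 * absorption)[None, :]
    print(pls.predict(test_radiance))
    ```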

  1. Inferring Muscle-Tendon Unit Power from Ankle Joint Power during the Push-Off Phase of Human Walking: Insights from a Multiarticular EMG-Driven Model

    PubMed Central

    2016-01-01

    Introduction Inverse dynamics joint kinetics are often used to infer contributions from underlying groups of muscle-tendon units (MTUs). However, such interpretations are confounded by multiarticular (multi-joint) musculature, which can cause inverse dynamics to over- or under-estimate net MTU power. Misestimation of MTU power could lead to incorrect scientific conclusions, or to empirical estimates that misguide musculoskeletal simulations, assistive device designs, or clinical interventions. The objective of this study was to investigate the degree to which ankle joint power overestimates net plantarflexor MTU power during the Push-off phase of walking, due to the behavior of the flexor digitorum and hallucis longus (FDHL)–multiarticular MTUs crossing the ankle and metatarsophalangeal (toe) joints. Methods We performed a gait analysis study on six healthy participants, recording ground reaction forces, kinematics, and electromyography (EMG). Empirical data were input into an EMG-driven musculoskeletal model to estimate ankle power. This model enabled us to parse contributions from mono- and multi-articular MTUs, and required only one scaling and one time delay factor for each subject and speed, which were solved for based on empirical data. Net plantarflexing MTU power was computed by the model and quantitatively compared to inverse dynamics ankle power. Results The EMG-driven model was able to reproduce inverse dynamics ankle power across a range of gait speeds (R2 ≥ 0.97), while also providing MTU-specific power estimates. We found that FDHL dynamics caused ankle power to slightly overestimate net plantarflexor MTU power, but only by ~2–7%. Conclusions During Push-off, FDHL MTU dynamics do not substantially confound the inference of net plantarflexor MTU power from inverse dynamics ankle power. However, other methodological limitations may cause inverse dynamics to overestimate net MTU power; for instance, due to rigid-body foot assumptions. Moving forward, the EMG-driven modeling approach presented could be applied to understand other tasks or larger multiarticular MTUs. PMID:27764110

  2. Evaluation of joint position sense measured by inversion angle replication error in patients with an osteochondral lesion of the talus.

    PubMed

    Nakasa, Tomoyuki; Adachi, Nobuo; Shibuya, Hayatoshi; Okuhara, Atsushi; Ochi, Mitsuo

    2013-01-01

    The etiology of the osteochondral lesion of the talar dome (OLT) remains unclear. A joint position sense deficit of the ankle is reported to be a possible cause of ankle disorder. Repeated contact of the articular surface of the talar dome with the plafond during inversion might be a cause of OLT. The aim of the present study was to evaluate the joint position sense deficit by measuring the replication error of the inversion angle in patients with OLT. The replication error, which is the difference between the index angle and replication angle in inversion, was measured in 15 patients with OLT. The replication error in 15 healthy volunteers was evaluated as a control group. The side to side differences of the replication errors between the patients with OLT and healthy volunteers and the replication errors in each angle between the involved and uninvolved ankle in the patients with OLT were investigated. Finally, the side to side differences of the replication errors between the patients with OLT with a traumatic and nontraumatic history were compared. The side to side difference in the patients with OLT (1.3° ± 0.2°) was significantly greater than that in the healthy subjects (0.4° ± 0.7°) (p ≤ .05). Significant differences were found between the involved and uninvolved sides at 10°, 15°, 20°, and 25° in the patients with OLT. No significant difference (p > .05) was found between the patients with traumatic and nontraumatic OLT. The present study found that the patients with OLT have a joint position sense deficit during inversion movement, regardless of a traumatic history. Although various factors for the etiology of OLT have been reported, the joint position sense deficit in inversion might be a cause of OLT. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  3. Inferring Muscle-Tendon Unit Power from Ankle Joint Power during the Push-Off Phase of Human Walking: Insights from a Multiarticular EMG-Driven Model.

    PubMed

    Honert, Eric C; Zelik, Karl E

    2016-01-01

    Inverse dynamics joint kinetics are often used to infer contributions from underlying groups of muscle-tendon units (MTUs). However, such interpretations are confounded by multiarticular (multi-joint) musculature, which can cause inverse dynamics to over- or under-estimate net MTU power. Misestimation of MTU power could lead to incorrect scientific conclusions, or to empirical estimates that misguide musculoskeletal simulations, assistive device designs, or clinical interventions. The objective of this study was to investigate the degree to which ankle joint power overestimates net plantarflexor MTU power during the Push-off phase of walking, due to the behavior of the flexor digitorum and hallucis longus (FDHL)-multiarticular MTUs crossing the ankle and metatarsophalangeal (toe) joints. We performed a gait analysis study on six healthy participants, recording ground reaction forces, kinematics, and electromyography (EMG). Empirical data were input into an EMG-driven musculoskeletal model to estimate ankle power. This model enabled us to parse contributions from mono- and multi-articular MTUs, and required only one scaling and one time delay factor for each subject and speed, which were solved for based on empirical data. Net plantarflexing MTU power was computed by the model and quantitatively compared to inverse dynamics ankle power. The EMG-driven model was able to reproduce inverse dynamics ankle power across a range of gait speeds (R2 ≥ 0.97), while also providing MTU-specific power estimates. We found that FDHL dynamics caused ankle power to slightly overestimate net plantarflexor MTU power, but only by ~2-7%. During Push-off, FDHL MTU dynamics do not substantially confound the inference of net plantarflexor MTU power from inverse dynamics ankle power. However, other methodological limitations may cause inverse dynamics to overestimate net MTU power; for instance, due to rigid-body foot assumptions. Moving forward, the EMG-driven modeling approach presented could be applied to understand other tasks or larger multiarticular MTUs.

  4. Ankle taping can reduce external ankle joint moments during drop landings on a tilted surface.

    PubMed

    Sato, Nahoko; Nunome, Hiroyuki; Hopper, Luke S; Ikegami, Yasuo

    2017-09-20

    Ankle taping is commonly used to prevent ankle sprains. However, kinematic assessments investigating the biomechanical effects of ankle taping have provided inconclusive results. This study aimed to determine the effect of ankle taping on the external ankle joint moments during a drop landing on a tilted surface at 25°. Twenty-five participants performed landings on a tilted force platform that caused ankle inversion with and without ankle taping. Landing kinematics were captured using a motion capture system. External ankle inversion moment, the angular impulse due to the medio-lateral and vertical components of ground reaction force (GRF) and their moment arm lengths about the ankle joint were analysed. The foot plantar inclination relative to the ground was assessed. In the taping condition, the foot plantar inclination and ankle inversion angular impulse were reduced significantly compared to that of the control. The only component of the external inversion moment to change significantly in the taped condition was a shortened medio-lateral GRF moment arm length. It can be assumed that the ankle taping altered the foot plantar inclination relative to the ground, thereby shortening the moment arm of medio-lateral GRF that resulted in the reduced ankle inversion angular impulse.

  5. Common Structure in Different Physical Properties: Electrical Conductivity and Surface Waves Phase Velocity

    NASA Astrophysics Data System (ADS)

    Mandolesi, E.; Jones, A. G.; Roux, E.; Lebedev, S.

    2009-12-01

    Recently, different studies have been undertaken on the correlation between diverse geophysical datasets. Magnetotelluric (MT) data are used to map the electrical conductivity structure beneath the Earth's surface, but one of the problems with the MT method is its lack of resolution in mapping zones beneath a region of high conductivity. Joint inversion of different datasets in which a common structure is recognizable reduces non-uniqueness and may improve the quality of interpretation when the different datasets are sensitive to different physical properties with an underlying common structure. A common structure is recognized if the changes in physical properties occur at the same spatial locations. Common structure may be recognized in 1D inversion of seismic and MT datasets, and numerous authors show that a 2D common structure may also lead to an improvement in inversion quality when the datasets are jointly inverted. In this presentation a tool to constrain 2D MT inversion with the phase velocity of surface wave seismic data (SW) is proposed and is being developed and tested on synthetic data. The results obtained suggest that the joint inversion scheme could be applied with success along a section profile for which the data are compatible with a 2D MT model.

  6. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
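
    For context, the conventional three-parameter MOLLI fit that IG fitting generalizes can be sketched as below, with the usual Look-Locker correction T1 = T1* (B/A - 1). The inversion times and signal values are illustrative; the per-grouping parameters of IG fitting are not shown.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def molli_signal(TI, A, B, T1_star):
        """Conventional three-parameter MOLLI model: S(TI) = A - B * exp(-TI / T1*)."""
        return A - B * np.exp(-TI / T1_star)

    # Simulated inversion times (ms) and signal for one pixel (illustrative values only)
    TI = np.array([100., 180., 260., 1100., 1180., 1260., 2100., 2180., 3100., 3180., 4100.])
    A_true, B_true, T1_true = 1.0, 1.9, 1200.0
    T1_star_true = T1_true / (B_true / A_true - 1.0)
    signal = molli_signal(TI, A_true, B_true, T1_star_true) \
             + 0.01 * np.random.default_rng(0).standard_normal(TI.size)

    popt, _ = curve_fit(molli_signal, TI, signal, p0=[1.0, 2.0, 1000.0])
    A, B, T1_star = popt
    T1 = T1_star * (B / A - 1.0)          # standard Look-Locker correction
    print(f"T1 ~ {T1:.0f} ms")
    ```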

  7. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  8. Mechanical stability of the subtalar joint after lateral ligament sectioning and ankle brace application: a biomechanical experimental study.

    PubMed

    Kamiya, Tomoaki; Kura, Hideji; Suzuki, Daisuke; Uchiyama, Eiichi; Fujimiya, Mineko; Yamashita, Toshihiko

    2009-12-01

    The roles of each ligament supporting the subtalar joint have not been clarified despite several biomechanical studies. The effects of ankle braces on subtalar instability have not been shown. The ankle brace has a partial effect on restricting excessive motion of the subtalar joint. Controlled laboratory study. Ten normal fresh-frozen cadaveric specimens were used. The angular motions of the talus were measured via a magnetic tracking system. The specimens were tested while inversion and eversion forces, as well as internal and external rotation torques, were applied. The calcaneofibular ligament, cervical ligament, and interosseous talocalcaneal ligament were sectioned sequentially, and the roles of each ligament, as well as the stabilizing effects of the ankle brace, were examined. Complete sectioning of the ligaments increased the angle between the talus and calcaneus in the frontal plane to 51.7° ± 11.8° compared with 35.7° ± 6.0° in the intact state when inversion force was applied. There was a statistically significant difference in the angles between complete sectioning of the ligaments and after application of the brace (34.1° ± 7.3°) when inversion force was applied. On the other hand, significant differences in subtalar rotation were not found between complete sectioning of the ligaments and application of the brace when internal and external rotational torques were applied. The ankle brace limited inversion of the subtalar joint, but it did not restrict motion after application of internal or external rotational torques. In cases of severe ankle sprains involving the calcaneofibular ligament, cervical ligament, and interosseous talocalcaneal ligament injuries, application of an ankle brace might be less effective in limiting internal-external rotational instabilities than in cases of inversion instabilities in the subtalar joint. An improvement in the design of the brace is needed to restore better rotational stability in the subtalar joint.

  9. Density Imaging of Puy de Dôme Volcano by Joint Inversion of Muographic and Gravimetric Data

    NASA Astrophysics Data System (ADS)

    Barnoud, A.; Niess, V.; Le Ménédeu, E.; Cayol, V.; Carloganu, C.

    2016-12-01

    We aim to jointly invert high-density muographic and gravimetric data to robustly infer the density structure of volcanoes. We use the puy de Dôme volcano in France as a proof of principle, since high-quality data sets are available for both muography and gravimetry. Gravimetric inversion and muography are independent methods that provide an estimation of density distributions. On the one hand, gravimetry allows 3D density variations to be reconstructed by inversion. This process is well known to be ill-posed and intrinsically non-unique, so it requires additional constraints (e.g., an a priori density model). On the other hand, muography provides a direct measurement of 2D mean densities (radiographic images) from the detection of high-energy atmospheric muons crossing the volcanic edifice. 3D density distributions can be computed from several radiographic images, but the number of images is generally limited by field constraints and by the limited number of available telescopes. Thus, muon tomography is also ill-posed in practice. In the case of the puy de Dôme volcano, the density structures inferred from gravimetric data (Portal et al., 2016) and from muographic data (Le Ménédeu et al., 2016) show a qualitative agreement but cannot be compared quantitatively. Because each method has different intrinsic resolutions due to the physics (Jourde et al., 2015), the joint inversion is expected to improve the robustness of the inversion. Such joint inversion has already been applied in a volcanic context (Nishiyama et al., 2013). Volcano muography requires state-of-the-art, high-resolution and large-scale muon detectors (Ambrosino et al., 2015). Instrumental uncertainties and systematic errors may constitute an important limitation for muography and should not be overlooked. For instance, low-energy muons are detected together with ballistic high-energy muons, decreasing the measured value of the mean density close to the topography. Here, we jointly invert the gravimetric and muographic data to characterize the 3D density distribution of the puy de Dôme volcano. We attempt to precisely identify and estimate the different uncertainties and systematic errors so that they can be accounted for in the inversion scheme.

  10. Fast Nonlinear Generalized Inversion of Gravity Data with Application to the Three-Dimensional Crustal Density Structure of Sichuan Basin, Southwest China

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Meng, Xiaohong; Li, Fang

    2017-11-01

    Generalized inversion is one of the important steps in the quantitative interpretation of gravity data. With an appropriate algorithm and parameters, it gives a view of the subsurface that characterizes different geological bodies. However, generalized inversion of gravity data is time consuming because of the large number of data points and model cells adopted. Incorporating various kinds of prior information as constraints worsens this situation further. In the work discussed in this paper, a method for fast nonlinear generalized inversion of gravity data is proposed. The fast multipole method is employed for forward modelling. The inversion objective function is established with a weighted data misfit function along with a model objective function. The total objective function is solved by a data-space algorithm. Moreover, a depth weighting factor is used to improve the depth resolution of the result, and a bound constraint is incorporated through a transfer function to keep the model parameters in a reliable range. The matrix inversion is accomplished by a preconditioned conjugate gradient method. With the above algorithm, equivalent density vectors can be obtained, and interpolation is performed to obtain the final density model on the fine mesh in the model domain. Testing on synthetic gravity data demonstrated that the proposed method is faster than the conventional generalized inversion algorithm in producing an acceptable solution for the gravity inversion problem. The newly developed inversion method was also applied to the inversion of the gravity data collected over Sichuan basin, southwest China. The density structure established in this study helps in understanding the crustal structure of Sichuan basin and provides a reference for further oil and gas exploration in this area.
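
    Two ingredients mentioned above, the depth weighting factor and the bound-constraining transfer function, can be sketched in a few lines. The forms below (a Li and Oldenburg-style power-law weight and a logistic mapping onto density bounds) are assumptions made for illustration, not the paper's exact choices, and all parameter values are hypothetical.

    ```python
    import numpy as np

    def depth_weighting(z, z0=1.0, beta=2.0):
        """Depth weight w(z) = (z + z0)^(-beta/2) that counteracts kernel decay with depth."""
        return (z + z0) ** (-beta / 2.0)

    def bound_transfer(x, lower=-0.5, upper=0.5):
        """Map an unconstrained variable x onto the density-contrast interval [lower, upper]."""
        return lower + (upper - lower) / (1.0 + np.exp(-x))

    def bound_transfer_inv(m, lower=-0.5, upper=0.5):
        """Inverse mapping, used to initialise the unconstrained variable from a model."""
        return np.log((m - lower) / (upper - m))

    z = np.array([50.0, 150.0, 300.0, 600.0])       # cell-centre depths (m)
    print("depth weights:", depth_weighting(z))
    x0 = bound_transfer_inv(np.zeros(4))            # start at zero density contrast
    print("bounded model:", bound_transfer(x0))
    ```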

  11. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization method. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  12. A gradient based algorithm to solve inverse plane bimodular problems of identification

    NASA Astrophysics Data System (ADS)

    Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing

    2018-02-01

    This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.

  13. Effect of Direct Ligament Repair and Tenodesis Reconstruction on Simulated Subtalar Joint Instability.

    PubMed

    Choisne, Julie; Hoch, Matthew C; Alexander, Ian; Ringleb, Stacie I

    2017-03-01

    Subtalar instability is associated with up to 80% of patients presenting with chronic ankle instability but is often not considered in the diagnosis or treatment. Operative procedures to repair ankle instability have shown good clinical results, but the effects of these reconstruction procedures on isolated subtalar instability are not well understood. The goal of this study was to investigate the effect of the Gould modification of the Broström procedure and a new tenodesis reconstruction procedure on ankle and subtalar joint kinematics after simulating a subtalar injury. Kinematic data were collected on 7 cadaveric ankles during inversion through the range of ankle flexion and during internal rotation. Testing was performed on the intact foot; after sectioning the calcaneofibular ligament, cervical ligament, and interosseous talocalcaneal ligament; after the Gould modification of the Broström procedure was performed; and after tenodesis was performed and sutures from the Gould modification removed. The Gould modification of the Broström procedure significantly decreased subtalar and ankle inversion motion and subtalar internal rotation compared to the unstable condition. The tenodesis method restricted internal rotation at the subtalar joint and ankle inversion compared to the intact state. Both operative procedures improved stability of the ankle complex, but tenodesis was unable to restore subtalar inversion and restricted ankle inversion in maximum plantarflexion. The Gould modification of Broström ligament repair may be a favorable operative procedure for the restoration of subtalar and ankle joint kinematics.

  14. Joint inversion of geophysical data using petrophysical clustering and facies deformation with the level set technique

    NASA Astrophysics Data System (ADS)

    Revil, A.

    2015-12-01

    Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solutions for the distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to perform such deformation, preserving the prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The method is applied to a second synthetic case showing that we can recover the heterogeneities inside the facies, the mean values of the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of gravity and galvanometric resistivity data. For this 2D synthetic example, we note that the positions of the facies are well recovered except far from the ground surface, where the sensitivity is too low. The figure shows the evolution of the shape of the facies during the inversion, iteration by iteration.

  15. Comparison of trend analyses for Umkehr data using new and previous inversion algorithms

    NASA Technical Reports Server (NTRS)

    Reinsel, Gregory C.; Tam, Wing-Kuen; Ying, Lisa H.

    1994-01-01

    Ozone vertical profile Umkehr data for layers 3-9 obtained from 12 stations, using both previous and new inversion algorithms, were analyzed for trends. The trends estimated for the Umkehr data from the two algorithms were compared using two data periods, 1968-1991 and 1977-1991. Both nonseasonal and seasonal trend models were fitted. The overall annual trends are found to be significantly negative, of the order of -5% per decade, for layers 7 and 8 using both inversion algorithms. The largest negative trends occur in these layers under the new algorithm, whereas in the previous algorithm the most negative trend occurs in layer 9. The trend estimates, both annual and seasonal, are substantially different between the two algorithms mainly for layers 3, 4, and 9, where trends from the new algorithm data are about 2% per decade less negative, with less appreciable differences in layers 7 and 8. The trend results from the two data periods are similar, except for layer 3 where trends become more negative, by about -2% per decade, for 1977-1991.

  16. Three-dimensional joint inversion for magnetotelluric resistivity and static shift distributions in complex media

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Meju, Max A.

    2006-05-01

    Accurate interpretation of magnetotelluric (MT) data in the presence of static shift arising from near-surface inhomogeneities is an unresolved problem in three-dimensional (3-D) inversion. While it is well known in 1-D and 2-D studies that static shift can lead to erroneous interpretation, how static shift can influence the result of 3-D inversion is not fully understood and is relevant to improved subsurface analysis. Using the synthetic data generated from 3-D models with randomly distributed heterogeneous overburden and elongate homogeneous overburden that are consistent with geological observations, this paper examines the effects of near-surface inhomogeneity on the accuracy of 3-D inversion models. It is found that small-scale and shallow depth structures are severely distorted while the large-scale structure is marginally distorted in 3-D inversion not accounting for static shift; thus the erroneous near-surface structure does degrade the reconstruction of smaller-scale structure at any depth. However, 3-D joint inversion for resistivity and static shift significantly reduces the artifacts caused by static shifts and improves the overall resolution, irrespective of whether a zero-sum or Gaussian distribution of static shifts is assumed. The 3-D joint inversion approach works equally well for situations where the shallow bodies are of small size or long enough to allow some induction such that the effects of near-surface inhomogeneity are manifested as a frequency-dependent shift rather than a constant shift.

  17. Joint Inversion of Vp, Vs, and Resistivity at SAFOD

    NASA Astrophysics Data System (ADS)

    Bennington, N. L.; Zhang, H.; Thurber, C. H.; Bedrosian, P. A.

    2010-12-01

    Seismic and resistivity models at SAFOD have been derived from separate inversions that show significant spatial similarity between the main model features. Previous work [Zhang et al., 2009] used cluster analysis to make lithologic inferences from trends in the seismic and resistivity models. We have taken this one step further by developing a joint inversion scheme that uses the cross-gradient penalty function to achieve structurally similar Vp, Vs, and resistivity images that adequately fit the seismic and magnetotelluric (MT) data without forcing model similarity where none exists. The new inversion code, tomoDDMT, merges the seismic inversion code tomoDD [Zhang and Thurber, 2003] and the MT inversion code Occam2DMT [Constable et al., 1987; deGroot-Hedlin and Constable, 1990]. We are exploring the utility of the cross-gradient penalty function in improving models of fault-zone structure at SAFOD on the San Andreas Fault in the Parkfield, California area. Two different sets of end-member starting models are being tested. One set is the separately inverted Vp, Vs, and resistivity models. The other set consists of simple, geologically based block models developed from borehole information at the SAFOD drill site and a simplified version of features seen in geophysical models at Parkfield. For both starting models, our preliminary results indicate that the inversion produces a converging solution with resistivity, seismic, and cross-gradient misfits decreasing over successive iterations. We also compare the jointly inverted Vp, Vs, and resistivity models to borehole information from SAFOD to provide a "ground truth" comparison.
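
    The cross-gradient penalty referred to above measures how parallel the spatial gradients of two models are: for 2D models m1 and m2 it is t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx), and the sum of t^2 over the grid is added to the joint objective. The sketch below is an illustrative numpy computation of this quantity, not the tomoDDMT implementation; the grid size and model values are hypothetical.

    ```python
    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        """Cross-gradient t = dm1/dx * dm2/dz - dm1/dz * dm2/dx on a common 2D grid."""
        dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
        dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
        return dm1_dx * dm2_dz - dm1_dz * dm2_dx

    def cross_gradient_penalty(m1, m2, dx=1.0, dz=1.0):
        """Structural-similarity penalty: zero wherever the model gradients are (anti)parallel."""
        return np.sum(cross_gradient(m1, m2, dx, dz) ** 2)

    # Toy models: a velocity anomaly and a resistivity anomaly with the same outline,
    # then the resistivity anomaly shifted so that the structures no longer coincide.
    z, x = np.mgrid[0:50, 0:50]
    anomaly = ((x - 25) ** 2 + (z - 25) ** 2 < 100).astype(float)
    vp = 5.0 + 0.5 * anomaly             # km/s
    log_rho = 2.0 - 1.0 * anomaly        # log10(ohm*m)
    print("coincident structures:", cross_gradient_penalty(vp, log_rho))
    print("shifted structure    :", cross_gradient_penalty(vp, np.roll(log_rho, 12, axis=1)))
    ```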

  18. Influence of ankle joint plantarflexion and dorsiflexion on lateral ankle sprain: A computational study.

    PubMed

    Purevsuren, Tserenchimed; Kim, Kyungsoo; Batbaatar, Myagmarbayar; Lee, SuKyoung; Kim, Yoon Hyuk

    2018-05-01

    Understanding the mechanism of injury involved in lateral ankle sprain is essential to prevent injury, to establish surgical repair and reconstruction, and to plan reliable rehabilitation protocols. Most studies of lateral ankle sprain posit that ankle inversion, internal rotation, and plantarflexion are involved in the mechanism of injury. However, recent studies indicated that ankle dorsiflexion also plays an important role in the lateral ankle sprain mechanism. In this study, the contributions of ankle plantarflexion and dorsiflexion on the ankle joint were evaluated under complex combinations of internal rotation and inversion moments. A multibody ankle joint model including 24 ligaments was developed and validated against two experimental cadaveric studies. The effects of ankle plantarflexion (up to 60°) and dorsiflexion (up to 30°) on the lateral ankle sprain mechanism under ankle inversion moment coupled with internal rotational moment were investigated using the validated model. Lateral ankle sprain injuries can occur during ankle dorsiflexion, in which calcaneofibular ligament and anterior talofibular ligament tears may occur in association with excessive inversion and internal rotational moments, respectively. Various combinations of inversion and internal rotation moments may lead to anterior talofibular ligament injuries at early ankle plantarflexion, while the inversion moment acts as the primary factor tearing the anterior talofibular ligament in early plantarflexion. It is better to consider inversion and internal rotation as primary factors of the lateral ankle sprain mechanism, while plantarflexion or dorsiflexion can be a secondary factor. This information will help to clarify the lateral ankle sprain mechanism of injury.

  19. Quantifying the influences of spectral resolution on uncertainty in leaf trait estimates through a Bayesian approach to RTM inversion

    DOE PAGES

    Shiklomanov, Alexey N.; Dietze, Michael C.; Viskari, Toni; ...

    2016-06-09

    The remote monitoring of plant canopies is critically needed for understanding of terrestrial ecosystem mechanics and biodiversity as well as capturing the short- to long-term responses of vegetation to disturbance and climate change. A variety of orbital, sub-orbital, and field instruments have been used to retrieve optical spectral signals and to study different vegetation properties such as plant biochemistry, nutrient cycling, physiology, water status, and stress. Radiative transfer models (RTMs) provide a mechanistic link between vegetation properties and observed spectral features, and RTM spectral inversion is a useful framework for estimating these properties from spectral data. However, existing approaches to RTM spectral inversion are typically limited by the inability to characterize uncertainty in parameter estimates. Here, we introduce a Bayesian algorithm for the spectral inversion of the PROSPECT 5 leaf RTM that is distinct from past approaches in two important ways: First, the algorithm only uses reflectance and does not require transmittance observations, which have been plagued by a variety of measurement and equipment challenges. Second, the output is not a point estimate for each parameter but rather the joint probability distribution that includes estimates of parameter uncertainties and covariance structure. We validated our inversion approach using a database of leaf spectra together with measurements of equivalent water thickness (EWT) and leaf dry mass per unit area (LMA). The parameters estimated by our inversion were able to accurately reproduce the observed reflectance (RMSE VIS = 0.0063, RMSE NIR-SWIR = 0.0098) and transmittance (RMSE VIS = 0.0404, RMSE NIR-SWIR = 0.0551) for both broadleaved and conifer species. Inversion estimates of EWT and LMA for broadleaved species agreed well with direct measurements (CV EWT = 18.8%, CV LMA = 24.5%), while estimates for conifer species were less accurate (CV EWT = 53.2%, CV LMA = 63.3%). To examine the influence of spectral resolution on parameter uncertainty, we simulated leaf reflectance as observed by ten common remote sensing platforms with varying spectral configurations and performed a Bayesian inversion on the resulting spectra. We found that full-range hyperspectral platforms were able to retrieve all parameters accurately and precisely, while the parameter estimates of multispectral platforms were much less precise and prone to bias at high and low values. We also observed that variations in the width and location of spectral bands influenced the shape of the covariance structure of parameter estimates. Lastly, our Bayesian spectral inversion provides a powerful and versatile framework for future RTM development and single- and multi-instrumental remote sensing of vegetation.
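
    A minimal Metropolis sampler illustrates the Bayesian-inversion idea of returning a joint posterior distribution rather than a point estimate. In the sketch below a toy two-parameter reflectance model stands in for PROSPECT 5 (the real RTM is far beyond a short example), so the forward model, parameter names, step size, and noise level are all assumptions made for illustration.

    ```python
    import numpy as np

    def toy_reflectance(wl, water, dry_matter):
        """Hypothetical stand-in for a leaf RTM: reflectance decays with both parameters."""
        return 0.5 * np.exp(-water * wl / 1000.0) + 0.2 * np.exp(-dry_matter * wl / 2000.0)

    def log_posterior(theta, wl, obs, sigma=0.01):
        water, dm = theta
        if water <= 0 or dm <= 0:                    # flat prior on positive values
            return -np.inf
        resid = obs - toy_reflectance(wl, water, dm)
        return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian likelihood

    def metropolis(wl, obs, n_iter=5000, step=0.05, seed=1):
        rng = np.random.default_rng(seed)
        theta = np.array([1.0, 1.0])
        lp = log_posterior(theta, wl, obs)
        samples = np.empty((n_iter, 2))
        for i in range(n_iter):
            proposal = theta + rng.normal(0.0, step, 2)
            lp_prop = log_posterior(proposal, wl, obs)
            if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
                theta, lp = proposal, lp_prop
            samples[i] = theta
        return samples

    wl = np.linspace(400.0, 2500.0, 50)              # wavelengths (nm)
    obs = toy_reflectance(wl, 0.8, 1.5) + np.random.default_rng(2).normal(0.0, 0.01, wl.size)
    posterior = metropolis(wl, obs)[2500:]           # discard burn-in
    print("posterior mean:", posterior.mean(axis=0))
    print("posterior sd  :", posterior.std(axis=0))
    ```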

  1. Lithospheric architecture of NE China from joint Inversions of receiver functions and surface wave dispersion through Bayesian optimisation

    NASA Astrophysics Data System (ADS)

    Sebastian, Nita; Kim, Seongryong; Tkalčić, Hrvoje; Sippl, Christian

    2017-04-01

    The purpose of this study is to develop an integrated inference on the lithospheric structure of NE China using three passive seismic networks comprising 92 stations. The NE China plain consists of complex lithospheric domains characterised by the co-existence of complex geodynamic processes such as crustal thinning, active intraplate Cenozoic volcanism and low-velocity anomalies. To estimate lithospheric structures in greater detail, we chose to perform the joint inversion of independent data sets, namely receiver functions and surface wave dispersion curves (group and phase velocity). We perform a joint inversion based on principles of Bayesian transdimensional optimisation techniques (Kim et al., 2016). Unlike in previous studies of NE China, the complexity of the model is determined from the data in the first stage of the inversion, and the data uncertainty is computed based on Bayesian statistics in the second stage of the inversion. The computed crustal properties are retrieved from an ensemble of probable models. We obtain major structural inferences with well-constrained absolute velocity estimates, which are vital for inferring properties of the lithosphere and the bulk crustal Vp/Vs ratio. The Vp/Vs estimate obtained from the joint inversions confirms the high Vp/Vs ratio (~1.98) obtained using the H-Kappa method beneath some stations. Moreover, we could confirm the existence of a lower crustal velocity beneath several stations (e.g., station SHS) within the NE China plain. Based on these findings we attempt to identify a plausible origin for the structural complexity. We compile a high-resolution 3D image of the lithospheric architecture of the NE China plain.

  2. Joint inversion of marine seismic AVA and CSEM data using statistical rock-physics models and Markov random fields: Stochastic inversion of AVA and CSEM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Hoversten, G.M.

    2011-09-15

    Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann's equations and Archie's law, using nearby borehole logs. This could be difficult in the exploration stage because the available information is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and the inaccuracy of the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is captured by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.

  3. Accommodating Chromosome Inversions in Linkage Analysis

    PubMed Central

    Chen, Gary K.; Slaten, Erin; Ophoff, Roel A.; Lange, Kenneth

    2006-01-01

    This work develops a population-genetics model for polymorphic chromosome inversions. The model precisely describes how an inversion changes the nature of and approach to linkage equilibrium. The work also describes algorithms and software for allele-frequency estimation and linkage analysis in the presence of an inversion. The linkage algorithms implemented in the software package Mendel estimate recombination parameters and calculate the posterior probability that each pedigree member carries the inversion. Application of Mendel to eight Centre d'Étude du Polymorphisme Humain pedigrees in a region containing a common inversion on 8p23 illustrates its potential for providing more-precise estimates of the location of an unmapped marker or trait gene. Our expanded cytogenetic analysis of these families further identifies inversion carriers and increases the evidence of linkage. PMID:16826515

  4. Optimization of contrast-to-tissue ratio by adaptation of transmitted ternary signal in ultrasound pulse inversion imaging.

    PubMed

    Ménigot, Sébastien; Girault, Jean-Marc

    2013-01-01

    Ultrasound contrast imaging has provided more accurate medical diagnoses thanks to the development of innovative modalities such as pulse inversion imaging. However, this modality, which improves the contrast-to-tissue ratio (CTR), is not optimal, since the transmit frequency is chosen manually together with the probe. An optimal choice of this command is possible, but it requires precise information about the transducer and the medium, which can be experimentally difficult or even impossible to obtain. The optimization becomes more complex when the kind of generator is taken into account, since the generators of electrical signals in a conventional ultrasound scanner can be unipolar, bipolar, or tripolar. Our aim was to seek the ternary command that maximized the CTR. By combining a genetic algorithm and a closed loop, the system automatically proposed the optimal ternary command. In simulation, the gain compared with the usual ternary signal could reach about 3.9 dB. Another interesting finding was that, in contrast to what is generally accepted, the optimal command was not a fixed-frequency signal but had harmonic components.
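
    The closed-loop search can be pictured as a genetic algorithm evolving ternary-coded transmit sequences (values in {-1, 0, +1}). The sketch below uses a placeholder fitness, energy concentrated in a hypothetical target band, in place of the measured CTR from the pulse-inversion loop, so the sequence length, mutation rate, and frequencies are illustrative assumptions rather than the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N, POP, GENERATIONS = 64, 30, 100    # command length, population size, generations
    FS, TARGET_HZ = 40e6, 2.5e6          # sampling rate and hypothetical target frequency

    def fitness(command):
        """Placeholder fitness: fraction of spectral energy near the target frequency."""
        spectrum = np.abs(np.fft.rfft(command))
        freqs = np.fft.rfftfreq(N, 1.0 / FS)
        band = np.abs(freqs - TARGET_HZ) < 0.5e6
        return spectrum[band].sum() / (spectrum.sum() + 1e-12)

    def mutate(command, rate=0.05):
        out = command.copy()
        flips = rng.random(N) < rate
        out[flips] = rng.choice([-1, 0, 1], flips.sum())
        return out

    pop = rng.choice([-1, 0, 1], size=(POP, N))      # random initial ternary commands
    for _ in range(GENERATIONS):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[::-1][:POP // 2]]      # keep the fittest half
        children = []
        for _ in range(POP - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N)                             # one-point crossover
            children.append(mutate(np.concatenate([a[:cut], b[cut:]])))
        pop = np.vstack([parents, np.array(children)])

    best = pop[np.argmax([fitness(c) for c in pop])]
    print("best fitness:", fitness(best))
    ```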

  5. Improved preconditioned conjugate gradient algorithm and application in 3D inversion of gravity-gradiometry data

    NASA Astrophysics Data System (ADS)

    Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye

    2017-06-01

    With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is becoming increasingly used in oil and gas exploration. In the fast processing and interpretation of large-scale, high-precision data, the use of the graphics processing unit (GPU) and of preconditioning methods is very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique with the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on the GPU is proposed. The improved method is then applied to the inversion of noise-contaminated synthetic data to prove its adaptability to the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times over a serial program using a 2.0 GHz Central Processing Unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for fast inversion of 3D FTG data.
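
    The preconditioning idea can be sketched in serial form: build the SSOR preconditioner from the lower triangle and diagonal of a symmetric positive-definite matrix and pass its inverse action to a conjugate gradient solver. The sketch below (SciPy, with a 1D Laplacian test matrix) only illustrates SSOR preconditioning; it is not the paper's GPU-parallel SSOR-ICCG implementation, and the relaxation parameter and test system are arbitrary choices.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg, spsolve_triangular

    def ssor_preconditioner(A, omega=1.2):
        """SSOR preconditioner M = (D/w + L) (D/w)^-1 (D/w + L)^T / (w(2-w)) for SPD A.

        Applying M^-1 amounts to one forward and one backward triangular solve.
        """
        diag = A.diagonal()
        lower = (sp.diags(diag) / omega + sp.tril(A, k=-1)).tocsr()
        scale = omega * (2.0 - omega)

        def apply(r):
            y = spsolve_triangular(lower, r, lower=True)           # forward solve
            y = diag / omega * y                                    # scale by D/omega
            z = spsolve_triangular(lower.T.tocsr(), y, lower=False) # backward solve
            return scale * z

        return LinearOperator(A.shape, matvec=apply)

    # Toy SPD system: 1D Laplacian.
    n = 200
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x, info = cg(A, b, M=ssor_preconditioner(A))
    print("converged:", info == 0, " residual norm:", np.linalg.norm(A @ x - b))
    ```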

  6. Neural joint control for Space Shuttle Remote Manipulator System

    NASA Technical Reports Server (NTRS)

    Atkins, Mark A.; Cox, Chadwick J.; Lothers, Michael D.; Pap, Robert M.; Thomas, Charles R.

    1992-01-01

    Neural networks are being used to control a robot arm in a telerobotic operation. The concept uses neural networks for both joint and inverse kinematics in a robotic control application. An upper level neural network is trained to learn inverse kinematic mappings. The output, a trajectory, is then fed to the Decentralized Adaptive Joint Controllers. This neural network implementation has shown that the controlled arm recovers from unexpected payload changes while following the reference trajectory. The neural network-based decentralized joint controller is faster, more robust and efficient than conventional approaches. Implementations of this architecture are discussed that would relax assumptions about dynamics, obstacles, and heavy loads. This system is being developed to use with the Space Shuttle Remote Manipulator System.

  7. Uncertainties in the 2004 Sumatra–Andaman source through nonlinear stochastic inversion of tsunami waves

    PubMed Central

    Venugopal, M.; Roy, D.; Rajendran, K.; Guillas, S.; Dias, F.

    2017-01-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra–Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems. PMID:28989311

  8. Uncertainties in the 2004 Sumatra-Andaman source through nonlinear stochastic inversion of tsunami waves.

    PubMed

    Gopinathan, D; Venugopal, M; Roy, D; Rajendran, K; Guillas, S; Dias, F

    2017-09-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra-Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems.

  9. A New Inversion-Based Algorithm for Retrieval of Over-Water Rain Rate from SSM/I Multichannel Imagery

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Stettner, David R.

    1994-01-01

    This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms that perform explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempt to match the observations to the predicted brightness temperatures.

  10. VLSI architectures for computing multiplications and inverses in GF(2^m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.

    1985-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
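
    The two operations being pipelined, field multiplication and inversion in GF(2^m), can be illustrated in software. The sketch below uses a polynomial-basis representation of GF(2^8) with the AES field polynomial and computes inverses by exponentiation (a^(2^m - 2)); the hardware design described here instead uses a normal basis and the Massey-Omura multiplier, so this is an illustration of the arithmetic, not of the architecture.

    ```python
    # GF(2^8) with the field polynomial x^8 + x^4 + x^3 + x + 1 (polynomial basis).
    MODULUS = 0x11B
    M = 8

    def gf_mul(a, b):
        """Carry-less multiplication followed by reduction modulo the field polynomial."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            b >>= 1
            a <<= 1
            if a & (1 << M):     # reduce whenever the degree reaches m
                a ^= MODULUS
        return result

    def gf_inv(a):
        """Inverse by Fermat's little theorem: a^(2^m - 2) via square-and-multiply."""
        if a == 0:
            raise ZeroDivisionError("0 has no inverse in GF(2^m)")
        result, base, exponent = 1, a, (1 << M) - 2
        while exponent:
            if exponent & 1:
                result = gf_mul(result, base)
            base = gf_mul(base, base)   # squaring is cheap in GF(2^m)
            exponent >>= 1
        return result

    x = 0x57
    print(hex(gf_mul(x, gf_inv(x))))    # expect 0x1
    ```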

  11. VLSI architectures for computing multiplications and inverses in GF(2^m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.

    1983-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  12. VLSI architectures for computing multiplications and inverses in GF(2^m).

    PubMed

    Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S

    1985-08-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.

  13. Expecting ankle tilts and wearing an ankle brace influence joint control in an imitated ankle sprain mechanism during walking.

    PubMed

    Gehring, Dominic; Wissler, Sabrina; Lohrer, Heinz; Nauck, Tanja; Gollhofer, Albert

    2014-03-01

    A thorough understanding of the functional aspects of ankle joint control is essential to developing effective injury prevention. It is of special interest to understand how neuromuscular control mechanisms and mechanical constraints stabilize the ankle joint. Therefore, the aim of the present study was to determine how expecting ankle tilts and the application of an ankle brace influence ankle joint control when imitating the ankle sprain mechanism during walking. Ankle kinematics and muscle activity were assessed in 17 healthy men. During gait, rapid perturbations were applied using a trapdoor (tilting with 24° inversion and 15° plantarflexion). The subjects either knew that a perturbation would definitely occur (expected tilts) or there was only the possibility that a perturbation would occur (potential tilts). Both conditions were conducted with and without a semi-rigid ankle brace. Expecting perturbations led to an increased ankle eversion at foot contact, which was mediated by an altered muscle preactivation pattern. Moreover, the maximal inversion angle (-7%) and velocity (-4%), as well as the reactive muscle response, were significantly reduced when the perturbation was expected. While wearing an ankle brace influenced neither muscle preactivation nor the ankle kinematics before ground contact, it significantly reduced the maximal ankle inversion angle (-14%) and velocity (-11%) as well as the reactive neuromuscular responses. The present findings reveal that expecting ankle inversion modifies neuromuscular joint control prior to landing. Although such motor control strategies are weaker in magnitude compared with braces, they seem to assist ankle joint stabilization in a close-to-injury situation. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. A Fast EM Algorithm for Fitting Joint Models of a Binary Response and Multiple Longitudinal Covariates Subject to Detection Limits

    PubMed Central

    Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy

    2014-01-01

    Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564

  15. Acoustic Inversion in Optoacoustic Tomography: A Review

    PubMed Central

    Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel

    2013-01-01

    Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
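
    Of the four algorithm classes reviewed, the time-domain formulae are the simplest to sketch: each image pixel accumulates the detector signals evaluated at the acoustic time of flight from the pixel to each detector. The code below is a bare delay-and-sum version of this idea for a 2D circular detection geometry; it omits the derivative and solid-angle weighting of the full universal back-projection formula, and the geometry, sampling rate, and test data are arbitrary illustrative choices.

    ```python
    import numpy as np

    C = 1500.0      # speed of sound (m/s)
    FS = 20e6       # sampling rate (Hz)

    def delay_and_sum(signals, det_pos, grid_x, grid_y):
        """signals: (n_detectors, n_samples); det_pos: (n_detectors, 2) in metres."""
        image = np.zeros((grid_y.size, grid_x.size))
        xx, yy = np.meshgrid(grid_x, grid_y)
        for sig, (dx, dy) in zip(signals, det_pos):
            dist = np.hypot(xx - dx, yy - dy)                  # pixel-detector distance
            sample = np.clip((dist / C * FS).astype(int), 0, sig.size - 1)
            image += sig[sample]                               # value at the time of flight
        return image / len(signals)

    # Toy data: a point absorber at the origin seen by a ring of 64 detectors.
    n_det, radius, n_samples = 64, 0.02, 1200
    angles = np.linspace(0, 2 * np.pi, n_det, endpoint=False)
    det_pos = radius * np.column_stack([np.cos(angles), np.sin(angles)])
    signals = np.zeros((n_det, n_samples))
    for i, (dx, dy) in enumerate(det_pos):
        signals[i, int(np.hypot(dx, dy) / C * FS)] = 1.0       # idealised delta arrival

    grid = np.linspace(-0.01, 0.01, 101)
    image = delay_and_sum(signals, det_pos, grid, grid)
    print("brightest pixel at grid index:", np.unravel_index(image.argmax(), image.shape))
    ```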

  16. Geoelectric Characterization of Thermal Water Aquifers Using 2.5D Inversion of VES Measurements

    NASA Astrophysics Data System (ADS)

    Gyulai, Á.; Szűcs, P.; Turai, E.; Baracza, M. K.; Fejes, Z.

    2017-03-01

    This paper presents a short theoretical summary of the series expansion-based 2.5D combined geoelectric weighted inversion (CGWI) method and highlights how the simultaneous character of this inversion advantageously reduces the number of unknowns. 2.5D CGWI is an approximate inversion method for the determination of 3D structures, which uses the joint 2D forward modeling of dip- and strike-direction data. In the inversion procedure, Steiner's most frequent value method is applied to the automatic separation of dip- and strike-direction data and outliers. The workflow of the inversion and its practical application are presented in the study. For conventional vertical electrical sounding (VES) measurements, this method can determine the parameters of complex structures more accurately than single inversion. Field data show that the 2.5D CGWI method which was developed can determine the optimal location for drilling an exploratory thermal water prospecting well. The novelty of this research is that the measured VES data in the dip and strike directions are jointly inverted by the 2.5D CGWI method.

  17. Stochastic joint inversion of hydrogeophysical data for salt tracer test monitoring and hydraulic conductivity imaging

    NASA Astrophysics Data System (ADS)

    Jardani, A.; Revil, A.; Dupont, J. P.

    2013-02-01

    The assessment of the hydraulic conductivity of heterogeneous aquifers is a difficult task using traditional hydrogeological methods (e.g., steady-state or transient pumping tests) due to their low spatial resolution. Geophysical measurements performed at the ground surface and in boreholes provide additional information for increasing the resolution and accuracy of the inverted hydraulic conductivity field. We used a stochastic joint inversion of direct current (DC) resistivity and self-potential (SP) data plus in situ measurements of the salinity in a downstream well during a synthetic salt tracer experiment to reconstruct the hydraulic conductivity field between two wells. The pilot point parameterization was used to avoid over-parameterization of the inverse problem. Bounds on the model parameters were used to promote consistent Markov chain Monte Carlo sampling of the model parameters. To evaluate the effectiveness of the joint inversion process, we compared eight cases in which the geophysical data are coupled or not to the in situ sampling of the salinity to map the hydraulic conductivity. We first tested the effectiveness of the inversion of each type of data alone (concentration sampling, self-potential, and DC resistivity), and then we combined the data two by two. We finally combined all the data together to show the value of each type of geophysical data in the joint inversion process, given their different sensitivity maps. We also investigated a case in which the data were contaminated with noise and the variogram was unknown and inverted stochastically. The results of the inversion revealed that incorporating the self-potential data improves the estimate of the hydraulic conductivity field, especially when the self-potential data were combined with the salt concentration measurements in the second well or with the time-lapse cross-well electrical resistivity data. Various tests were also performed to quantify the uncertainty in the inverted hydraulic conductivity field.

  18. Calculation method of water injection forward modeling and inversion process in oilfield water injection network

    NASA Astrophysics Data System (ADS)

    Liu, Long; Liu, Wei

    2018-04-01

    A forward modeling and inversion algorithm is adopted to determine the water injection plan in an oilfield water injection network. The main idea of the algorithm is as follows: first, the oilfield water injection network is calculated inversely and the required pumping station flow is obtained. Then, a forward modeling calculation is carried out to judge whether all water injection wells meet the requirements of the injection allocation. If all water injection wells meet the requirements, the calculation is stopped; otherwise, the demanded injection allocation flow rate is reduced by a certain step size for the water injection wells that do not meet the requirements, and the next iteration is started. The algorithm does not need to be embedded into the overall water injection network system algorithm and can be implemented easily. An iterative method is used, which is well suited to computer programming. Experimental results show that the algorithm is fast and accurate.
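
    The loop described above (inverse calculation of the station demand, forward check of each well, stepwise reduction of the demanded allocation for failing wells, repeat) can be written down directly. In the sketch below the two hydraulic network solvers are replaced by toy placeholder functions, so the well names, capacities, step size, and flows are purely illustrative.

    ```python
    STEP = 0.5        # reduction step for the demanded allocation (m^3/h)
    MAX_ITER = 100

    def inverse_station_flow(demanded):
        """Placeholder inverse calculation: pumping-station flow needed for the demands."""
        return sum(demanded.values())

    def forward_well_flows(station_flow, demanded):
        """Placeholder forward model: wells near capacity receive less than demanded."""
        capacity = {"W1": 40.0, "W2": 25.0, "W3": 30.0}
        return {w: min(q, capacity[w]) for w, q in demanded.items()}

    demanded = {"W1": 35.0, "W2": 32.0, "W3": 28.0}   # initial injection allocation plan
    for iteration in range(MAX_ITER):
        station_flow = inverse_station_flow(demanded)           # inverse step
        achieved = forward_well_flows(station_flow, demanded)   # forward check
        failing = [w for w in demanded if achieved[w] < demanded[w] - 1e-6]
        if not failing:
            break                                               # all wells satisfied
        for w in failing:                                       # reduce demand stepwise
            demanded[w] = max(demanded[w] - STEP, 0.0)

    print(f"converged after {iteration + 1} iterations, plan: {demanded}")
    ```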

  19. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    NASA Astrophysics Data System (ADS)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.

  20. Adaptive Inverse Control for Rotorcraft Vibration Reduction

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1985-01-01

    This thesis extends the Least Mean Square (LMS) algorithm to solve the multiple-input, multiple-output problem of alleviating N/Rev (blade number times rotor rotational speed) helicopter fuselage vibration by means of adaptive inverse control. A frequency-domain locally linear model is used to represent the transfer matrix relating the higher harmonic pitch control inputs to the harmonic vibration outputs to be controlled. By using the inverse matrix as the controller gain matrix, an adaptive inverse regulator is formed to alleviate the N/Rev vibration. The stability and rate of convergence properties of the extended LMS algorithm are discussed. It is shown that the stability ranges for the elements of the stability gain matrix are directly related to the eigenvalues of the vibration signal information matrix for the learning phase, but not for the control phase. The overall conclusion is that the LMS adaptive inverse control method can form a robust vibration control system, but will require some tuning of the input sensor gains, the stability gain matrix, and the amount of control relaxation to be used. The learning curve of the controller during the learning phase is shown to be quantitatively close to that predicted by averaging the learning curves of the normal modes. For higher-order transfer matrices, a rough estimate of the inverse is needed to start the algorithm efficiently. The simulation results indicate that the factor which most influences LMS adaptive inverse control is the product of the control relaxation and the stability gain matrix. A small stability gain matrix makes the controller less sensitive to relaxation selection and permits faster and more stable vibration reduction than choosing the stability gain matrix large and the control relaxation term small. It is shown that the best selection of the stability gain matrix elements and the amount of control relaxation is basically a compromise between slow, stable convergence and fast convergence with an increased possibility of unstable identification. In the simulation studies, the LMS adaptive inverse control algorithm is shown to be capable of adapting the inverse (controller) matrix to track changes in the flight conditions. The algorithm converges quickly for moderate disturbances, while taking longer for larger disturbances. Perfect knowledge of the inverse matrix is not required for good control of the N/Rev vibration. However, it is shown that measurement noise will prevent the LMS adaptive inverse control technique from controlling the vibration unless the signal averaging method presented is incorporated into the algorithm.
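
    The control loop described above can be caricatured in a few lines: a quasi-static linear model z = z0 + T u relates the higher harmonic control inputs u to the vibration outputs z, an LMS update refines an estimate of T from successive input/output changes, and the control is updated with a relaxed inverse of that estimate. The sketch below is a toy illustration of this structure, not the thesis implementation; the matrix sizes, gains, and the use of a noisy copy of T as the rough starting estimate (the abstract notes such an estimate is needed) are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_out, n_in = 4, 3
    T_true = rng.normal(0.0, 1.0, (n_out, n_in))    # unknown plant transfer matrix
    z0 = rng.normal(0.0, 2.0, n_out)                # uncontrolled N/Rev vibration amplitudes

    def measure(u, noise=0.01):
        """Quasi-static plant: vibration = baseline + transfer matrix times control."""
        return z0 + T_true @ u + rng.normal(0.0, noise, n_out)

    # Rough starting estimate of the transfer matrix (e.g. from a prior learning phase).
    T_hat = T_true + 0.3 * rng.normal(0.0, 1.0, (n_out, n_in))
    mu, relax = 0.2, 0.4                            # LMS gain and control relaxation
    u_prev, z_prev = np.zeros(n_in), measure(np.zeros(n_in))
    u = 0.1 * rng.normal(0.0, 1.0, n_in)            # small probing control step

    for _ in range(200):
        z = measure(u)
        du, dz = u - u_prev, z - z_prev
        if du @ du > 1e-12:                         # normalized LMS update of T_hat
            T_hat += mu * np.outer(dz - T_hat @ du, du) / (du @ du)
        u_prev, z_prev = u, z
        u = u - relax * np.linalg.pinv(T_hat) @ z   # relaxed inverse-model control step

    print("uncontrolled vibration norm:", round(np.linalg.norm(z0), 3))
    print("controlled vibration norm  :", round(np.linalg.norm(measure(u)), 3))
    ```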

  1. Joint refraction and reflection travel-time tomography of multichannel and wide-angle seismic data

    NASA Astrophysics Data System (ADS)

    Begovic, Slaven; Meléndez, Adrià; Ranero, César; Sallarès, Valentí

    2017-04-01

    Both near-vertical multichannel (MCS) and wide-angle (WAS) seismic data are sensitive to the same properties of the sampled model, but they are commonly interpreted and modeled using different approaches. Traditional MCS images provide good information on the position and geometry of reflectors, especially in shallow, commonly sedimentary, layers, but contain limited or no refracted waves, which severely hampers the retrieval of velocity information. Compared to MCS data, conventional wide-angle seismic (WAS) travel-time tomography uses sparse data (stations are generally spaced several kilometers apart). While it contains refractions that allow retrieving velocity information, the data sparsity makes it difficult to define the velocity and the geometry of geologic boundaries (reflectors) with the appropriate resolution, especially at the shallowest crustal levels. A well-known strategy to overcome these limitations is to combine MCS and WAS data into a common inversion strategy. However, the number of available codes that can jointly invert both types of data is limited. We have adapted the well-known and widely used joint refraction and reflection travel-time tomography code tomo2d (Korenaga et al., 2000), and its 3D version tomo3d (Meléndez et al., 2015), to handle streamer data and multichannel acquisition geometries. This allows performing joint travel-time tomographic inversion based on refracted and reflected phases from both WAS and MCS data sets. We show, with a series of synthetic tests following a layer-stripping strategy, that by combining these two data sets in a joint travel-time tomographic method the drawbacks of each data set are notably reduced. First, we tested the traditional travel-time inversion scheme using only WAS data (refracted and reflected phases) with a typical acquisition geometry of one ocean bottom seismometer (OBS) every 10 km. Second, we jointly inverted WAS refracted and reflected phases with only streamer (MCS) reflection travel-times. Finally, we performed a joint inversion of the combined refracted and reflected phases from both data sets. The synthetic MCS data set was produced for an 8 km-long streamer, and the refracted phases used for the streamer were downward continued (projected onto the seafloor). Taking advantage of the high redundancy of MCS data, the definition of the reflector geometry and of the velocity of the uppermost layers is much improved. Additionally, long-offset wide-angle refracted phases minimize the velocity-depth trade-off of reflection travel-time inversion. As a result, the obtained models have increased accuracy in both velocity and reflector geometry compared to the independent inversion of each data set. This is further corroborated by a statistical parameter uncertainty analysis exploring the effects of the unknown initial model and data noise in the linearized inversion scheme.
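    A joint inversion of this kind typically minimizes a weighted sum of WAS and MCS travel-time residuals. The snippet below shows one plausible chi-square-style weighting; the pick uncertainties (sigma_was, sigma_mcs) and the relative weight w_mcs are illustrative assumptions, not values from the paper.

```python
import numpy as np

def joint_traveltime_misfit(t_obs_was, t_calc_was, t_obs_mcs, t_calc_mcs,
                            sigma_was=0.05, sigma_mcs=0.02, w_mcs=1.0):
    """Joint misfit combining wide-angle and streamer residuals (illustrative weights)."""
    r_was = (np.asarray(t_obs_was) - np.asarray(t_calc_was)) / sigma_was
    r_mcs = (np.asarray(t_obs_mcs) - np.asarray(t_calc_mcs)) / sigma_mcs
    return np.sum(r_was ** 2) + w_mcs * np.sum(r_mcs ** 2)
```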

  2. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect. It requires quite exhaustive computations, which prohibits its use in very large inverse problems like global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is an ongoing effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility to achieve this goal relies on exploring the internal symmetries of the seismological modeling problems at hand - time reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time reversal symmetry property, its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  3. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
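    The key computational trick described here is projecting each damped normal-equation solve onto a Krylov subspace that is built once and recycled across damping parameters. The sketch below, with an explicit Jacobian J and an Arnoldi-style basis of dimension k, is a simplified stand-in for the parallel implementation in MADS, not the authors' code.

```python
import numpy as np

def krylov_basis(JTJ_mv, g, k):
    """Orthonormal basis of the Krylov space K_k(J^T J, g); JTJ_mv is a matvec callable."""
    V = np.zeros((g.size, k))
    V[:, 0] = g / np.linalg.norm(g)
    for j in range(1, k):
        w = JTJ_mv(V[:, j - 1])
        w -= V[:, :j] @ (V[:, :j].T @ w)      # Gram-Schmidt orthogonalization (one pass)
        V[:, j] = w / np.linalg.norm(w)
    return V

def lm_steps_with_recycling(J, r, dampings, k=20):
    """Candidate Levenberg-Marquardt steps for several damping parameters, reusing one basis."""
    g = J.T @ r                                # right-hand side of the normal equations
    JTJ_mv = lambda v: J.T @ (J @ v)
    V = krylov_basis(JTJ_mv, g, k)             # built once, recycled for every damping value
    H = V.T @ (J.T @ (J @ V))                  # small (k x k) projected normal matrix
    rhs = V.T @ g
    steps = {}
    for lam in dampings:
        y = np.linalg.solve(H + lam * np.eye(k), rhs)
        steps[lam] = V @ y                     # step lifted back to the full parameter space
    return steps
```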

  4. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  5. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    NASA Astrophysics Data System (ADS)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, the fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as 3GPP physical-layer permutation matrix design for ultra mobile broadband, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.
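    The construction rests on Kronecker products of lower-order matrices and on the Jacket-style property that the inverse is the element-wise reciprocal, transposed and scaled by 1/N. The minimal sketch below uses a 2x2 Hadamard base matrix (an assumption made purely for illustration, since it satisfies the property trivially) and checks the inverse condition numerically.

```python
import numpy as np

# Jacket-style inverse (assumed here): element-wise reciprocal, transposed, scaled by 1/N.
H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])          # 2x2 Hadamard base matrix

def kron_power(base, k):
    """Matrix of order N = base.shape[0]**k built from Kronecker products of lower orders."""
    M = base
    for _ in range(k - 1):
        M = np.kron(M, base)
    return M

def jacket_inverse(M):
    """Inverse via the element-wise-reciprocal / transpose / 1-over-N rule."""
    return (1.0 / M).T / M.shape[0]

M = kron_power(H2, 3)                 # order N = 2**3 = 8
assert np.allclose(M @ jacket_inverse(M), np.eye(M.shape[0]))   # identity is recovered
```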

  6. Effects of load proportioning on the capacity of multiple-hole composite joints

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Chastain, P. A.

    1985-01-01

    This study addresses the issue of adjusting the proportion of load transmitted by each hole in a multiple-hole joint so that the joint capacity is a maximum. Specifically, two-hole-in-series joints are examined. The results indicate that when each hole reacts 50% of the total load, the joint capacity is not a maximum; one hole generally is understressed at joint failure. The algorithm developed to determine the load proportion at each hole which results in maximum capacity is discussed. The algorithm includes two-dimensional finite-element stress analysis and failure criteria. It is used to study the effects of joint width, hole spacing, and hole-to-joint-end distance on load proportioning and capacity. To study hole size effects, two hole diameters are considered. Three laminates are considered: a quasi-isotropic laminate, a cross-ply laminate, and a 45-degree angle-ply laminate. By proportioning the load, capacity can generally be increased by 5 to 10%. In some cases a greater increase is possible.

  7. Reconstruction of the temperature field for inverse ultrasound hyperthermia calculations at a muscle/bone interface.

    PubMed

    Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li

    2004-02-01

    An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratio of the reflected longitudinal wave to the incident longitudinal wave and that of the refracted shear wave to the total wave transmitted into bone when calculating the absorbed power field, and then to reconstruct the temperature distribution in the muscle and bone regions based on a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, of the amount of noise superimposed on the temperature measurements, and of the sensor locations on the performance of the inverse algorithm are investigated. Results show that noisy input data degrade the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results are also presented demonstrating an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on a singular-value decomposition analysis, the optimal sensor position in a case utilizing only one temperature sensor can be determined so that the inverse algorithm converges to the true solution.
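    Zero-order Tikhonov regularization, as used here, amounts to adding a damping term to the least-squares normal equations. A generic sketch follows; the matrix A, data vector b and regularization parameter lam are placeholders, not the paper's operators or values.

```python
import numpy as np

def tikhonov_zero_order(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Example usage with random placeholder data:
# A = np.random.default_rng(0).normal(size=(30, 10)); b = A @ np.ones(10)
# x_hat = tikhonov_zero_order(A, b, lam=1e-2)
```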

  8. Kinematic performance of a six degree-of-freedom hand model (6DHand) for use in occupational biomechanics.

    PubMed

    Buczek, Frank L; Sinsel, Erik W; Gloekler, Daniel S; Wimer, Bryan M; Warren, Christopher M; Wu, John Z

    2011-06-03

    Upper extremity musculoskeletal disorders represent an important health issue across all industry sectors; as such, the need exists to develop models of the hand that provide comprehensive biomechanics during occupational tasks. Previous optical motion capture studies used a single marker on the dorsal aspect of finger joints, allowing calculation of one and two degree-of-freedom (DOF) joint angles; additional algorithms were needed to define joint centers and the palmar surface of fingers. We developed a 6DOF model (6DHand) to obtain unconstrained kinematics of finger segments, modeled as frusta of right circular cones that approximate the palmar surface. To evaluate kinematic performance, twenty subjects gripped a cylindrical handle as a surrogate for a powered hand tool. We hypothesized that accessory motions (metacarpophalangeal pronation/supination; proximal and distal interphalangeal radial/ulnar deviation and pronation/supination; all joint translations) would be small (less than 5° rotations, less than 2 mm translations) if segment anatomical reference frames were aligned correctly and skin movement artifacts were negligible. For the gripping task, 93 of 112 accessory motions were small by our definition, suggesting this 6DOF approach appropriately models joints of the fingers. Metacarpophalangeal supination was larger than expected (approximately 10°), and may be adjusted through local reference frame optimization procedures previously developed for knee kinematics in gait analysis. Proximal translations at the metacarpophalangeal joints (approximately 10 mm) were explained by skin movement across the metacarpals, but would not corrupt inverse dynamics calculated for the phalanges. We assessed performance in this study; a more rigorous validation would likely require medical imaging. Published by Elsevier Ltd.

  9. Lateral Ligament Repair and Reconstruction Restore Neither Contact Mechanics of the Ankle Joint nor Motion Patterns of the Hindfoot

    PubMed Central

    Prisk, Victor R.; Imhauser, Carl W.; O'Loughlin, Padhraig F.; Kennedy, John G.

    2010-01-01

    Background: Ankle sprains may damage both the lateral ligaments of the hindfoot and the osteochondral tissue of the ankle joint. When nonoperative treatment fails, operative approaches are indicated to restore both native motion patterns at the hindfoot and ankle joint contact mechanics. The goal of the present study was to determine the effect of lateral ligament injury, repair, and reconstruction on ankle joint contact mechanics and hindfoot motion patterns. Methods: Eight cadaveric specimens were tested with use of robotic technology to apply combined compressive (200-N) and inversion (4.5-Nm) loads to the hindfoot at 0° and 20° of plantar flexion. Contact mechanics at the ankle joint were simultaneously measured. A repeated-measures experiment was designed with use of the intact condition as control, with the other conditions including sectioned anterior talofibular and calcaneofibular ligaments, the Broström and Broström-Gould repairs, and graft reconstruction. Results: Ligament sectioning decreased contact area and caused a medial and anterior shift in the center of pressure with inversion loads relative to those with the intact condition. There were no significant differences in inversion or coupled axial rotation with inversion between the Broström repair and the intact condition; however, medial translation of the center of pressure remained elevated after the Broström repair relative to the intact condition. The Gould modification of the Broström procedure provided additional support to the hindfoot relative to the Broström repair, reducing inversion and axial rotation with inversion beyond that of intact ligaments. There were no significant differences in center-of-pressure excursion patterns between the Broström-Gould repair and the intact ligament condition, but this repair increased contact area beyond that with the ligaments intact. Graft reconstruction more closely restored inversion motion than did the Broström-Gould repair at 20° of plantar flexion but limited coupled axial rotation. Graft reconstruction also increased contact areas beyond the lateral ligament-deficient conditions but altered center-of-pressure excursion patterns relative to the intact condition. Conclusions: No lateral ankle ligament reconstruction completely restored native contact mechanics of the ankle joint and hindfoot motion patterns. Clinical Relevance: Our results provide a rationale for conducting long-term, prospective, comparative, in vivo studies to assess the impact of altered mechanics following lateral ligament injury, and its nonoperative and operative treatment, on the development of ankle osteoarthritis. PMID:20962188

  10. AIDA - from Airborne Data Inversion to In-Depth Analysis

    NASA Astrophysics Data System (ADS)

    Meyer, U.; Goetze, H.; Schroeder, M.; Boerner, R.; Tezkan, B.; Winsemann, J.; Siemon, B.; Alvers, M.; Stoll, J. B.

    2011-12-01

    The rising competition in land use, especially between the water economy, agriculture, forestry, the building-material economy, and other industries, often leads to irreversible deterioration of the water and soil system (such as salinization and degradation), which results in long-term damage to natural resources. A sustainable exploitation of the near subsurface by industry, the economy, and private households is a fundamental demand of a modern society. To fulfill this demand, a sound and comprehensive knowledge of the structures and processes of the near subsurface is an important prerequisite. A spatial survey of the usable underground by aerogeophysical means and a subsequent ground geophysics survey targeted at selected locations can deliver essential contributions within a short time and make it possible to gain the needed additional knowledge. The complementary use of airborne and ground geophysics, as well as the validation, assimilation, and improvement of current findings by geological and hydrogeological investigations and plausibility tests, leads to the following key questions: a) Which new and/or improved automatic algorithms (joint inversion, data assimilation and the like) are useful to describe the structural setting of the usable subsurface by user-specific characteristics such as water volume, layer thicknesses, porosities, etc.? b) What are the physical relations between the measured parameters (such as electrical conductivities, magnetic susceptibilities, densities, etc.)? c) How can we deduce from the observations characteristics or parameters that describe near-subsurface structures such as groundwater systems, their charge, discharge and recharge, vulnerabilities, and other quantities? d) How plausible and realistic are the numerically obtained results in relation to user-specific questions and parameters? e) Is it possible to compile material flux balances that describe spatial and time-dependent impacts of environmental changes on aquifers and soils by repeated airborne surveys? In order to follow up on these questions, the project aims to achieve the following goals: a) development of new and expansion of existing inversion strategies to improve structural parameter information on different space and time scales; b) development, modification, and testing of a multi-parameter inversion (joint inversion); c) development of new quantitative approaches in data assimilation and plausibility studies; d) compilation of optimized workflows for fast employment by end users; e) the primary goal of solving comparable society-related problems (such as salinization, erosion, contamination, degradation, etc.) in regions within Germany and abroad by generalization of the project results.

  11. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
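    The data-space approach pays off because the matrix that must be factorized has the size of the data vector rather than of the model vector. The sketch below shows a generic data-space Gauss-Newton update of this kind; it is not the HexMT code, and the Jacobian J, model covariance C_m, whitening operator C_d_inv_sqrt and residual are placeholders.

```python
import numpy as np

def data_space_gn_update(J, C_m, C_d_inv_sqrt, residual):
    """Data-space Gauss-Newton step: the system solved is n_data x n_data, not n_model x n_model."""
    Jw = C_d_inv_sqrt @ J                    # data-whitened Jacobian
    rw = C_d_inv_sqrt @ residual             # data-whitened residual
    S = Jw @ C_m @ Jw.T + np.eye(Jw.shape[0])
    return C_m @ Jw.T @ np.linalg.solve(S, rw)   # equivalent to the model-space update
```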

  12. Analysis of the Effects of Normal Walking on Ankle Joint Contact Characteristics After Acute Inversion Ankle Sprain.

    PubMed

    Bae, Ji Yong; Park, Kyung Soon; Seon, Jong Keun; Jeon, Insu

    2015-12-01

    To show the causal relationship between normal walking after various lateral ankle ligament (LAL) injuries caused by acute inversion ankle sprains and alterations in ankle joint contact characteristics, finite element simulations of normal walking were carried out using an intact ankle joint model and LAL injury models. A walking experiment using a volunteer with a normal ankle joint was performed to obtain the boundary conditions for the simulations and to support the appropriateness of the simulation results. Contact pressure and strain on the talus articular cartilage and anteroposterior and mediolateral translations of the talus were calculated. Ankles with ruptured anterior talofibular ligaments (ATFLs) had a higher likelihood of experiencing increased ankle joint contact pressures, strains and translations than intact ankles. In particular, ankles with ruptured ATFL + calcaneofibular ligaments and ankles with all ligaments ruptured had a likelihood similar to that of the ATFL-ruptured ankles. The push-off stance phase was the most likely situation for increased ankle joint contact pressures, strains and translations in LAL-injured ankles.

  13. Bayesian Approach to the Joint Inversion of Gravity and Magnetic Data, with Application to the Ismenius Area of Mars

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.

    2004-01-01

    This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data, with specific application to the Ismenius area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g., inversion of gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints on the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with the assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges - the forward predictions of the data have a linear dependence on some of the quantities of interest and a non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for the "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix". It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty and of what has been learned from the observations. We will discuss advanced numerical techniques, including Monte Carlo Markov
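    For the "linear" variables with Gaussian noise and a Gaussian prior, the posterior is available in closed form, which is the linear-filtering case mentioned above. A minimal sketch of that computation follows; the forward operator G, covariances C_d and C_m, and prior mean m_prior are placeholders for illustration.

```python
import numpy as np

def gaussian_linear_posterior(G, d, C_d, C_m, m_prior):
    """Posterior mean and covariance for d = G m + noise with Gaussian prior and noise.
    The posterior covariance is the explicitly computable "error" matrix."""
    C_d_inv = np.linalg.inv(C_d)
    C_m_inv = np.linalg.inv(C_m)
    C_post = np.linalg.inv(G.T @ C_d_inv @ G + C_m_inv)          # posterior (error) covariance
    m_post = C_post @ (G.T @ C_d_inv @ d + C_m_inv @ m_prior)    # posterior mean
    return m_post, C_post
```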

  14. A general rough-surface inversion algorithm: Theory and application to SAR data

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to the inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to the inversion of rough surfaces and can be applied to any parameterized scattering process.

  15. Aging Aircraft 2005, The Joint NASA/FAA/DOD Conference on Aging Aircraft, Decision algorithms for Electrical Wiring Interconnect Systems (EWIS)Fault Detection

    DTIC Science & Technology

    2005-02-03

    Aging Aircraft 2005, The 8th Joint NASA/FAA/DOD Conference on Aging Aircraft: Decision Algorithms for Electrical Wiring Interconnect Systems (EWIS) Fault Detection. Performing organizations: NASA Langley Research Center, 8 W. Taylor St., M/S 190, Hampton, VA 23681, and NAVAIR.

  16. Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali

    2013-04-01

    The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually non-linear and high-dimensional, with a complex search space that may be riddled with many local minima and results in irregular objective functions. We investigate here the performance and application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented and of avoiding entrapment in local minima. The effects of population size, elitism strategy, uniform crossover and low mutation rate are examined. The optimum solution parameters and performance were determined as a function of the testing-error convergence with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The crossover probability is 0.9-0.95, and mutation was tested at a probability of 0.01. The application of such a genetic algorithm to synthetic data shows that the acoustic impedance section is inverted efficiently. Keywords: seismic, inversion, acoustic impedance, genetic algorithm, fitness functions, crossover, mutation.
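    A float-encoded genetic algorithm of the kind described, with elitism, uniform crossover near 0.9-0.95, a 0.01 mutation probability and an L2-norm fitness, can be sketched as follows. The forward operator (mapping an impedance series to a synthetic trace), the impedance bounds and the population settings are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(impedance, observed_trace, forward):
    # L2 norm of the sample-to-sample difference between reference and inverted trace
    return np.linalg.norm(observed_trace - forward(impedance))

def ga_invert(observed_trace, forward, n_samples, pop=60, gens=300,
              p_cross=0.9, p_mut=0.01, bounds=(2000.0, 12000.0)):
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, n_samples))               # float-encoded population
    for _ in range(gens):
        f = np.array([fitness(p, observed_trace, forward) for p in P])
        order = np.argsort(f)
        elite = P[order[:2]].copy()                              # elitism strategy
        parents = P[order[:pop // 2]]
        idx = rng.integers(0, parents.shape[0], size=(pop - 2, 2))
        mask = rng.random((pop - 2, n_samples)) < 0.5            # uniform crossover
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        do_cross = rng.random(pop - 2) < p_cross                 # crossover probability ~0.9
        children[~do_cross] = parents[idx[~do_cross, 0]]
        mutate = rng.random(children.shape) < p_mut              # low mutation probability
        children[mutate] = rng.uniform(lo, hi, size=mutate.sum())
        P = np.vstack([elite, children])
    f = np.array([fitness(p, observed_trace, forward) for p in P])
    return P[np.argmin(f)]                                       # best inverted impedance series
```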

  17. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    PubMed

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems; it is a constrained combinatorial optimization problem and is more complex in the case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm obtains better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
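    For sequence-encoded individuals, the four mutation operators compared in the paper can each be written in a few lines. The sketch below operates on a Python list representing a task sequence; it illustrates the operators only, not the subpopulation-based algorithm itself.

```python
import random

def swap(seq):
    """Exchange two randomly chosen positions."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq):
    """Remove one element and reinsert it at another position."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def inversion(seq):
    """Reverse the order of a randomly chosen sub-sequence."""
    s = seq[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def displacement(seq):
    """Cut out a sub-sequence and splice it back in at a random position."""
    s = seq[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    block = s[i:j + 1]
    del s[i:j + 1]
    k = random.randrange(len(s) + 1)
    return s[:k] + block + s[k:]
```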

  18. Genetic algorithms and their use in Geophysical Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.

  19. Genetic algorithms and their use in geophysical problems

    NASA Astrophysics Data System (ADS)

    Parker, Paul Bradley

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.

  20. An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring

    NASA Astrophysics Data System (ADS)

    Li, J. Y.; Kitanidis, P. K.

    2013-12-01

    Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
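    Ensemble methods of this family replace explicit Jacobian and covariance computations with sample estimates from an ensemble of model realizations. The sketch below shows a plain stochastic ensemble Kalman update with perturbed observations; it is background for the classical scheme the authors compare against, not the proposed iterative quasi-linear algorithm, and the observation operator and covariance are placeholders.

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_cov,
                rng=np.random.default_rng(0)):
    """Stochastic ensemble Kalman update: covariances are estimated from the ensemble,
    so no explicit Jacobian matrices are required.
    ensemble: (n_state, n_ens); obs_operator maps it to predicted data (n_obs, n_ens)."""
    X = ensemble
    Y = obs_operator(X)
    Xm, Ym = X.mean(axis=1, keepdims=True), Y.mean(axis=1, keepdims=True)
    Xp, Yp = X - Xm, Y - Ym
    n_ens = X.shape[1]
    C_xy = Xp @ Yp.T / (n_ens - 1)
    C_yy = Yp @ Yp.T / (n_ens - 1) + obs_cov
    K = C_xy @ np.linalg.inv(C_yy)                      # Kalman gain from sample covariances
    D = observations[:, None] + rng.multivariate_normal(
        np.zeros(observations.size), obs_cov, size=n_ens).T   # perturbed observations
    return X + K @ (D - Y)                              # updated ensemble
```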

  1. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting-both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.

  2. TOMO3D: 3-D joint refraction and reflection traveltime tomography parallel code for active-source seismic data—synthetic test

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.

    2015-10-01

    We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D, from which it inherits the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with the increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (~90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothed.

  3. SDM - A geodetic inversion code incorporating with layered crust structure and curved fault geometry

    NASA Astrophysics Data System (ADS)

    Wang, Rongjiang; Diao, Faqi; Hoechner, Andreas

    2013-04-01

    Currently, the inversion of geodetic data for earthquake fault ruptures is mostly based on a uniform half-space earth model because of its closed-form Green's functions. However, the layered structure of the crust can significantly affect the inversion results. The other effect, which is often neglected, is related to curved fault geometry. In particular, the fault planes of most megathrust earthquakes vary in dip angle with depth from a few to several tens of degrees. The strike directions of many large earthquakes are also variable. For simplicity, such curved fault geometry is usually approximated by several connected rectangular segments, leading to an artificial loss of slip resolution and data fit. In this presentation, we introduce a free FORTRAN code that incorporates the layered crust structure and curved fault geometry in a user-friendly way. The name SDM stands for Steepest Descent Method, an iterative algorithm used for the constrained least-squares optimization. The new code can be used for joint inversion of different datasets, which may include systematic offsets, as most geodetic data are obtained from relative measurements. These offsets are treated as unknowns to be determined simultaneously with the slip unknowns. In addition, a-priori and physical constraints are considered. The a-priori constraint includes the upper limit of the slip amplitude and the variation range of the slip direction (rake angle) defined by the user. The physical constraint is needed to obtain a smooth slip model, which is realized through a smoothing term to be minimized together with the misfit to the data. In contrast to most previous inversion codes, the smoothing can optionally be applied to slip or to stress drop. The code works with an input file, a well-documented example of which is provided with the source code. Application examples are demonstrated.
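    The essence of a steepest-descent, smoothing-regularized slip inversion can be conveyed with a small quadratic-objective sketch. The projection onto a simple 0-to-slip_max box stands in loosely for the a-priori amplitude constraint; the Green's function matrix G, smoothing operator L and weight beta are placeholders, and this is not the SDM code itself.

```python
import numpy as np

def steepest_descent_slip_inversion(G, d, L, beta, slip_max, n_iter=2000, step=None):
    """Minimize ||G m - d||^2 + beta ||L m||^2 by steepest descent, projecting each
    iterate onto the bound 0 <= m <= slip_max (a loose sketch of the constrained idea)."""
    m = np.zeros(G.shape[1])
    A = G.T @ G + beta * (L.T @ L)            # Hessian (up to a factor of 2)
    b = G.T @ d
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2)     # safe step from the largest eigenvalue of A
    for _ in range(n_iter):
        grad = A @ m - b
        m = np.clip(m - step * grad, 0.0, slip_max)   # gradient step plus box projection
    return m
```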

  4. Key Generation for Fast Inversion of the Paillier Encryption Function

    NASA Astrophysics Data System (ADS)

    Hirano, Takato; Tanaka, Keisuke

    We study fast inversion of the Paillier encryption function. In particular, we focus only on key generation and do not modify the Paillier encryption function itself. We propose three key generation algorithms based on speeding-up techniques for the RSA encryption function. By using our algorithms, the size of the private CRT exponent is half that of Paillier-CRT. The first algorithm employs the extended Euclidean algorithm. The second algorithm employs factoring algorithms and can construct a private CRT exponent with low Hamming weight. The third algorithm is a variant of the second one and has some advantages, such as compression of the private CRT exponent and no requirement for factoring algorithms. We also propose parameter settings for these algorithms and analyze the security of the Paillier encryption function under these algorithms against known attacks. Finally, we give experimental results for our algorithms.
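    The first key-generation algorithm is said to rely on the extended Euclidean algorithm. As general background only (not the paper's specific construction), the algorithm and the modular inverse it yields can be written as:

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, n):
    """Modular inverse of a mod n, if it exists."""
    g, x, _ = extended_gcd(a, n)
    if g != 1:
        raise ValueError("inverse does not exist")
    return x % n

# Example: modinv(3, 11) == 4, since 3 * 4 = 12 = 1 (mod 11)
```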

  5. Inverse problem of radiofrequency sounding of ionosphere

    NASA Astrophysics Data System (ADS)

    Velichko, E. N.; Yu. Grishentsev, A.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.

  6. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.

  7. Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.

    PubMed

    Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen

    2015-04-01

    In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multi-objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms.

  8. Human body motion tracking based on quantum-inspired immune cloning algorithm

    NASA Astrophysics Data System (ADS)

    Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing

    2009-10-01

    Recovering a 3D human body posture from a static monocular camera system remains a great challenge for computer vision. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm comprises three parts. First, using prior knowledge of the human body, key joint points are detected automatically from the human contours and from the skeletons thinned from those contours. Second, because of the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the human body. Finally, the pose is estimated by using QICA to optimize the match between the 2D projection of the 3D key joint points and the detected 2D key joint points; the algorithm recovers the movement of the human body well because it can acquire not only the global optimal solution but also local optimal solutions.

  9. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on a Gaussian image noise model. To begin with, combining the observing conditions and the AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we derive iterative solution formulas for the AO image based on our proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for actual AO image restoration.

  10. Glossary of Foot and Ankle Terms

    MedlinePlus

    ... or she will probably outgrow the condition naturally. Inversion - Twisting in toward the midline of the body. ... with the leg; the subtalar joint, which allows inversion and eversion of the foot with the leg; ...

  11. DenInv3D: a geophysical software for three-dimensional density inversion of gravity field data

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Ke, Xiaoping; Wang, Yong

    2018-04-01

    This paper presents a three-dimensional density inversion software package called DenInv3D that operates on gravity and gravity gradient data. The software performs inversion modelling, kernel function calculation, and inversion calculations using an improved preconditioned conjugate gradient (PCG) algorithm. In the PCG algorithm, because of the uncertainty of empirical parameters such as the Lagrange multiplier, we use the inflection point of the L-curve as the regularisation parameter. The software can construct unequally spaced grids and perform inversions on such grids, which enables the resolution of the inversion results to change with depth. Through inversion of airborne gradiometry data from the Australian Kauring test site, we found that anomalous blocks of different sizes are present within the study area in addition to the central anomalies. The DenInv3D software can be downloaded from http://159.226.162.30.
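    For reference, a textbook preconditioned conjugate gradient loop for a symmetric positive-definite system is sketched below. DenInv3D's improved PCG and its preconditioner are not described in this record, so the preconditioner here is just a user-supplied callable and the stopping rule is an assumption.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive-definite matrix A.
    M_inv is a callable applying the inverse of the preconditioner to a vector."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):   # relative residual stopping rule
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example usage with a diagonal (Jacobi) preconditioner:
# M_inv = lambda v: v / np.diag(A)
```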

  12. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  14. An algorithm for continuum modeling of rocks with multiple embedded nonlinearly-compliant joints [Continuum modeling of elasto-plastic media with multiple embedded nonlinearly-compliant joints]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurley, R. C.; Vorobiev, O. Y.; Ezzedine, S. M.

    Here, we present a numerical method for modeling the mechanical effects of nonlinearly-compliant joints in elasto-plastic media. The method uses a series of strain-rate and stress update algorithms to determine joint closure, slip, and solid stress within computational cells containing multiple “embedded” joints. This work facilitates efficient modeling of nonlinear wave propagation in large spatial domains containing a large number of joints that affect bulk mechanical properties. We implement the method within the massively parallel Lagrangian code GEODYN-L and provide verification and examples. We highlight the ability of our algorithms to capture joint interactions and multiple weakness planes within individual computational cells, as well as its computational efficiency. We also discuss the motivation for developing the proposed technique: to simulate large-scale wave propagation during the Source Physics Experiments (SPE), a series of underground explosions conducted at the Nevada National Security Site (NNSS).

  15. An algorithm for continuum modeling of rocks with multiple embedded nonlinearly-compliant joints [Continuum modeling of elasto-plastic media with multiple embedded nonlinearly-compliant joints]

    DOE PAGES

    Hurley, R. C.; Vorobiev, O. Y.; Ezzedine, S. M.

    2017-04-06

    Here, we present a numerical method for modeling the mechanical effects of nonlinearly-compliant joints in elasto-plastic media. The method uses a series of strain-rate and stress update algorithms to determine joint closure, slip, and solid stress within computational cells containing multiple “embedded” joints. This work facilitates efficient modeling of nonlinear wave propagation in large spatial domains containing a large number of joints that affect bulk mechanical properties. We implement the method within the massively parallel Lagrangian code GEODYN-L and provide verification and examples. We highlight the ability of our algorithms to capture joint interactions and multiple weakness planes within individual computational cells, as well as its computational efficiency. We also discuss the motivation for developing the proposed technique: to simulate large-scale wave propagation during the Source Physics Experiments (SPE), a series of underground explosions conducted at the Nevada National Security Site (NNSS).

  16. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal, and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated with the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested against actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost the same CPU time as the ABC algorithm alone. The superiority of the ABC-PS hybridization strategy in reaching a better balance between estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient measurement of PSD.

  17. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.
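
    The stabilized least-squares machinery sketched above (data weighted by uncertainty estimates, a priori parameter estimates stabilizing unresolved parameters, and linearized a posteriori uncertainties) can be illustrated generically. The Python sketch below assumes diagonal data and prior covariances and a precomputed sensitivity matrix G; it is a textbook linearized update, not the authors' waveform inversion code.

```python
import numpy as np

def stabilized_lsq_update(G, d_obs, d_pred, sigma_d, m_prior_sigma):
    """One linearized update with data weighted by its uncertainty and a priori
    parameter uncertainties acting as the stabilizer; also returns the linearized
    posterior covariance (a generic sketch, not the authors' exact formulation)."""
    Wd = np.diag(1.0 / np.asarray(sigma_d) ** 2)       # inverse data covariance
    Wm = np.diag(1.0 / np.asarray(m_prior_sigma) ** 2) # inverse prior covariance
    A = G.T @ Wd @ G + Wm
    cov_post = np.linalg.inv(A)                        # linearized uncertainties
    dm = cov_post @ (G.T @ Wd @ (d_obs - d_pred))
    return dm, cov_post

# toy example: 3 source parameters, 50 waveform samples
rng = np.random.default_rng(1)
G = rng.standard_normal((50, 3))
m_true = np.array([1.0, -0.5, 2.0])
d_obs = G @ m_true + 0.05 * rng.standard_normal(50)
dm, C = stabilized_lsq_update(G, d_obs, np.zeros(50), 0.05 * np.ones(50), np.ones(3))
print(dm, np.sqrt(np.diag(C)))
```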

  18. Spectral unmixing of agents on surfaces for the Joint Contaminated Surface Detector (JCSD)

    NASA Astrophysics Data System (ADS)

    Slamani, Mohamed-Adel; Chyba, Thomas H.; LaValley, Howard; Emge, Darren

    2007-09-01

    ITT Corporation, Advanced Engineering and Sciences Division, is currently developing the Joint Contaminated Surface Detector (JCSD) technology under an Advanced Concept Technology Demonstration (ACTD) managed jointly by the U.S. Army Research, Development, and Engineering Command (RDECOM) and the Joint Project Manager for Nuclear, Biological, and Chemical Contamination Avoidance for incorporation on the Army's future reconnaissance vehicles. This paper describes the design of the chemical agent identification (ID) algorithm associated with JCSD. The algorithm detects target chemicals mixed with surface and interferent signatures. Simulated data sets were generated from real instrument measurements to support a matrix of parameters based on a Design Of Experiments approach (DOE). Decisions based on receiver operating characteristics (ROC) curves and area-under-the-curve (AUC) measures were used to down-select between several ID algorithms. Results from top performing algorithms were then combined via a fusion approach to converge towards optimum rates of detections and false alarms. This paper describes the process associated with the algorithm design and provides an illustrating example.

  19. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.

  20. Medial compressible forefoot sole elements reduce ankle inversion in lateral SSC jumps.

    PubMed

    Fleischmann, Jana; Mornieux, Guillaume; Gehring, Dominic; Gollhofer, Albert

    2013-06-01

    Sideward movements are associated with high incidences of lateral ankle sprains. Special shoe constructions might be able to reduce these injuries during lateral movements. The purpose of this study was to investigate whether medial compressible forefoot sole elements can reduce ankle inversion in a reactive lateral movement, and to evaluate those elements' influence on neuromuscular and mechanical adjustments in lower extremities. Foot placement and frontal plane ankle joint kinematics and kinetics were analyzed by 3-dimensional motion analysis. Electromyographic data of triceps surae, peroneus longus, and tibialis anterior were collected. This modified shoe reduced ankle inversion in comparison with a shoe with a standard sole construction. No differences in ankle inversion moments were found. With the modified shoe, foot placement occurred more internally rotated, and muscle activity of the lateral shank muscles was reduced. Hence, lateral ankle joint stability during reactive sideward movements can be improved by these compressible elements, and therefore lower lateral shank muscle activity is required. As those elements limit inversion, the strategy to control inversion angles via a high external foot rotation does not need to be used.

  1. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Applying smoothness-constrained regularized inversion to limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. The applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, while preserving the sharp resistivity fronts separating geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in a more efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and faster-converging (run time decreased by about 25%) solution.
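
    The l1-regularized least-squares formulation with a DCT sparsifying transform can be written as min_x 0.5*||G idct(x) - d||^2 + lam*||x||_1. The paper solves the equivalent quadratic program with a primal-dual interior point method; the sketch below uses a simpler proximal-gradient (ISTA) iteration only to illustrate the same formulation on a toy problem, with all matrices and parameters assumed.

```python
import numpy as np
from scipy.fft import dct, idct

def ista_l1_dct(G, d, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||G @ idct(x) - d||^2 + lam*||x||_1, where x holds DCT
    coefficients of the model (sparsifying transform).  ISTA is used here only
    as a simple stand-in for the interior-point solver of the paper."""
    A = lambda x: G @ idct(x, norm='ortho')
    At = lambda r: dct(G.T @ r, norm='ortho')
    L = np.linalg.norm(G, 2) ** 2          # Lipschitz constant of the data term
    x = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = At(A(x) - d)
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return idct(x, norm='ortho')           # back to model space

# toy underdetermined problem with a blocky (sparse-in-DCT) model
rng = np.random.default_rng(2)
m_true = np.zeros(100); m_true[20:40] = 1.0
G = rng.standard_normal((40, 100))
d = G @ m_true
print(np.round(ista_l1_dct(G, d, lam=0.05), 2)[:50])
```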

  2. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction in electrical resistivity tomography (ERT) is highly non-linear, sparse, and ill-posed. The inverse problem is more severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of inherent sparsity in the parameter space in one form or another. If favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transformation (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to effectively reconstruct the subsurface image at lower computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.

  3. Sparsity constrained split feasibility for dose-volume constraints in inverse planning of intensity-modulated photon or proton therapy

    NASA Astrophysics Data System (ADS)

    Penfold, Scott; Zalas, Rafał; Casiraghi, Margherita; Brooke, Mark; Censor, Yair; Schulte, Reinhard

    2017-05-01

    A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy treatment planning with dose-volume constraints included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle 3 proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle 3 treatment planning system showed the algorithm could achieve equivalent or superior results.
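
    The feasibility-seeking core of the method, Byrne's CQ algorithm, iterates x_{k+1} = P_C(x_k - gamma * A^T(Ax_k - P_Q(Ax_k))) with gamma in (0, 2/||A||^2). The sketch below shows that basic iteration on a toy nonnegative-intensity/dose-bounds problem; the sparsity (percentage-violation) constraint and the automatic relaxation method with cyclic projections described in the paper are not reproduced.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    """Byrne's CQ iteration for the split feasibility problem:
    find x in C with A @ x in Q.  The dose-volume (percentage-violation)
    machinery of the paper is omitted; this is only the basic CQ step."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2     # gamma in (0, 2/||A||^2)
    x = x0.copy()
    for _ in range(n_iter):
        y = A @ x
        x = proj_C(x - gamma * A.T @ (y - proj_Q(y)))
    return x

# toy IMRT-like example: nonnegative beamlet weights (C), dose bounds (Q)
rng = np.random.default_rng(3)
A = rng.uniform(0, 1, (30, 10))                 # dose-influence matrix (assumed)
d_min, d_max = 0.8, 1.2                         # prescribed dose window
proj_C = lambda x: np.maximum(x, 0.0)           # nonnegativity of intensities
proj_Q = lambda y: np.clip(y, d_min, d_max)     # box of admissible doses
x = cq_algorithm(A, proj_C, proj_Q, np.zeros(10))
print(np.round(A @ x, 2))
```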

  4. Some practical aspects of prestack waveform inversion using a genetic algorithm: An example from the east Texas Woodbine gas sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallick, S.

    1999-03-01

    In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo type inversion, using a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information of the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.

  5. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Determining earthquake source parameters is an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip, and rake of fault planes) is significant for both rupture dynamics and ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, have become routine work for seismologists. However, some events behave unusually and intrigue seismologists: these earthquakes consist of two similar-sized sub-events occurring within a very short time interval, such as the mb 4.5 earthquake of December 9, 2003, in Virginia. Studying these special events, including determining the source parameters of each sub-event, is helpful for understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed, which complicates the inversion. For common events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth using a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct sub-events. Because the simultaneous inversion of both sub-events is very time-consuming, we also developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency. Thanks to the multi-dimensional storage and processing capabilities of GPUs, the revised code performs very well on a combined GPU-CPU architecture, with speed-up factors as high as 40x-90x compared with classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the December 9, 2003 events in Virginia, USA, we re-invert the source parameters, and detailed analysis of regional waveforms indicates that the Virginia earthquake included two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km, with a focal mechanism of strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared with the traditional two-source model method, MUL_CAP is more automatic, with no need for human intervention.

  6. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints, in terms of smoothing and/or damping, so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which a priori information about the sought parameters is combined with the observations to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D, or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian with an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be derived efficiently. I show that the maximum a posteriori (MAP) solution can be obtained using a non-negative least-squares algorithm for the singly truncated case or the bounded-variable least-squares algorithm for the doubly truncated case. I also show that the case of independent uniform priors can be approximated using TMVN. Numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is demonstrated for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdfs, mean, variance, or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
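
    As noted above, the MAP estimate under a truncated Gaussian prior reduces to a bound-constrained weighted least-squares problem. A minimal sketch, assuming diagonal covariances and an arbitrary toy Green's function matrix, stacks the whitened data misfit and prior terms and calls a bounded-variable least-squares solver:

```python
import numpy as np
from scipy.optimize import lsq_linear

def map_truncated_gaussian(G, d, m0, Cd_diag, Cm_diag, lower=0.0, upper=np.inf):
    """MAP of a linear Gaussian inverse problem with a (doubly) truncated
    Gaussian prior: minimize the stacked weighted least-squares objective
    subject to bound constraints (bounded-variable least squares).  Diagonal
    covariances are assumed for brevity."""
    Wd = 1.0 / np.sqrt(Cd_diag)          # whitening weights for the data
    Wm = 1.0 / np.sqrt(Cm_diag)          # whitening weights for the prior
    A = np.vstack([Wd[:, None] * G, np.diag(Wm)])
    b = np.concatenate([Wd * d, Wm * m0])
    res = lsq_linear(A, b, bounds=(lower, upper))
    return res.x

# toy fault-slip-like example: 5 slip patches, slip constrained to [0, 2] m
rng = np.random.default_rng(4)
G = rng.uniform(0, 1, (12, 5))           # elastic Green's functions (assumed)
m_true = np.array([0.0, 0.5, 1.8, 0.2, 0.0])
d = G @ m_true + 0.01 * rng.standard_normal(12)
m_map = map_truncated_gaussian(G, d, np.zeros(5), 1e-4 * np.ones(12),
                               np.ones(5), lower=0.0, upper=2.0)
print(np.round(m_map, 2))
```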

  7. Crustal and Upper Mantle Structure from Joint Inversion of Body Wave and Gravity Data

    DTIC Science & Technology

    2012-09-01

    Eric A. Bergman, Charlotte Rowe, and Monica Maceira. ... for these events include many readings of direct crustal P and S phases, as well as regional (Pn and Sn) and teleseismic phases. These data have been ... the usefulness of the gravity data, we apply high-pass filtering, yielding gravity anomalies that possess higher resolving power for crustal and ...

  8. RNA inverse folding using Monte Carlo tree search.

    PubMed

    Yang, Xiufeng; Yoshizoe, Kazuki; Taneda, Akito; Tsuda, Koji

    2017-11-06

    Artificially synthesized RNA molecules provide important ways for creating a variety of novel functional molecules. State-of-the-art RNA inverse folding algorithms can design simple and short RNA sequences of specific GC content, that fold into the target RNA structure. However, their performance is not satisfactory in complicated cases. We present a new inverse folding algorithm called MCTS-RNA, which uses Monte Carlo tree search (MCTS), a technique that has shown exceptional performance in Computer Go recently, to represent and discover the essential part of the sequence space. To obtain high accuracy, initial sequences generated by MCTS are further improved by a series of local updates. Our algorithm has an ability to control the GC content precisely and can deal with pseudoknot structures. Using common benchmark datasets for evaluation, MCTS-RNA showed a lot of promise as a standard method of RNA inverse folding. MCTS-RNA is available at https://github.com/tsudalab/MCTS-RNA .

  9. A sequential coalescent algorithm for chromosomal inversions

    PubMed Central

    Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M

    2013-01-01

    Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894

  10. Learning by Demonstration for Motion Planning of Upper-Limb Exoskeletons

    PubMed Central

    Lauretti, Clemente; Cordella, Francesca; Ciancio, Anna Lisa; Trigili, Emilio; Catalan, Jose Maria; Badesa, Francisco Javier; Crea, Simona; Pagliara, Silvio Marcello; Sterzi, Silvia; Vitiello, Nicola; Garcia Aracil, Nicolas; Zollo, Loredana

    2018-01-01

    The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms with the inverse Jacobian; this approach allows exploiting the available Degrees of Freedom (i.e. DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, if used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied in the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allow successfully assisting patients during Activities of Daily Living (ADLs) in unstructured environment, while ensuring that anthropomorphic criteria are satisfied in the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Motion Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton and four patients with Limb Girdle Muscular Dystrophy. Validation was addressed to (i) compare the performance of the proposed motion planning with traditional methods; (ii) assess the generalization capabilities of the proposed method with respect to the environment variability. Three ADLs were chosen to validate the system: drinking, pouring and lifting a light sphere. The achieved results showed a 100% success rate in the task fulfillment, with a high level of generalization with respect to the environment variability. Moreover, an anthropomorphic configuration of the exoskeleton is always ensured. PMID:29527161

  11. Learning by Demonstration for Motion Planning of Upper-Limb Exoskeletons.

    PubMed

    Lauretti, Clemente; Cordella, Francesca; Ciancio, Anna Lisa; Trigili, Emilio; Catalan, Jose Maria; Badesa, Francisco Javier; Crea, Simona; Pagliara, Silvio Marcello; Sterzi, Silvia; Vitiello, Nicola; Garcia Aracil, Nicolas; Zollo, Loredana

    2018-01-01

    The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms with the inverse Jacobian; this approach allows exploiting the available Degrees of Freedom (i.e. DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, if used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied in the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allow successfully assisting patients during Activities of Daily Living (ADLs) in unstructured environment, while ensuring that anthropomorphic criteria are satisfied in the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Motion Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton and four patients with Limb Girdle Muscular Dystrophy. Validation was addressed to (i) compare the performance of the proposed motion planning with traditional methods; (ii) assess the generalization capabilities of the proposed method with respect to the environment variability. Three ADLs were chosen to validate the system: drinking, pouring and lifting a light sphere. The achieved results showed a 100% success rate in the task fulfillment, with a high level of generalization with respect to the environment variability. Moreover, an anthropomorphic configuration of the exoskeleton is always ensured.

  12. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^3) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
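
    The selected-inverse idea, computing only the needed entries of the inverse overlap matrix by inverting local principal submatrices, can be sketched as follows. The neighborhood index sets stand in for the orbital localization regions; this toy dense-matrix version only illustrates the approximation, not the parallel implementation.

```python
import numpy as np

def selected_inverse_entries(S, neighborhoods):
    """Approximate selected entries of inv(S) by inverting, for each index i,
    the principal submatrix of S restricted to a local neighborhood of i
    (a sketch of the divide-and-conquer idea; neighborhoods would come from
    the orbital localization regions in the real code)."""
    approx = {}
    for i, idx in neighborhoods.items():
        idx = np.asarray(idx)
        block_inv = np.linalg.inv(S[np.ix_(idx, idx)])
        row = np.where(idx == i)[0][0]
        for col, j in enumerate(idx):
            approx[(i, j)] = block_inv[row, col]
    return approx

# toy banded "overlap" matrix: diagonally dominant, rapidly decaying off-diagonal
n = 12
S = np.eye(n) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)
nbrs = {i: [j for j in range(max(0, i - 2), min(n, i + 3))] for i in range(n)}
approx = selected_inverse_entries(S, nbrs)
exact = np.linalg.inv(S)
print(abs(approx[(5, 6)] - exact[5, 6]))   # small error for a well-localized S
```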

  13. Inverse Dynamics Model for the Ankle Joint with Applications in Tibia Malleolus Fracture

    NASA Astrophysics Data System (ADS)

    Budescu, E.; Merticaru, E.; Chirazi, M.

    The paper presents a biomechanical model of the ankle joint for determining the reaction force and torque in the articulation through inverse dynamic analysis at various stages of the gait. Knowing the acceleration of the foot and the experimentally measured reaction force between foot and ground during gait, the joint reaction forces were calculated for five different positions of the foot on the basis of dynamic balance equations. The numerically determined values were compared with the admissible forces in the technical systems of osteosynthesis for tibia malleolus fracture, in order to emphasize the motion restrictions during bone healing.
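
    The dynamic balance equations referred to above are the Newton-Euler equations of the foot segment: the ankle reaction force balances the inertial, gravitational, and ground-reaction terms, and the ankle moment balances the angular momentum change and the moments of those forces about the segment's centre of mass. A generic 2D (sagittal-plane) sketch with illustrative numbers:

```python
import numpy as np

def ankle_inverse_dynamics_2d(m, I, a_com, alpha, r_com, r_ankle, r_cop, F_grf,
                              g=np.array([0.0, -9.81])):
    """Newton-Euler inverse dynamics of the foot segment in the sagittal plane:
    given foot mass/inertia, measured COM acceleration, angular acceleration,
    ground reaction force and centre of pressure, return the ankle joint
    reaction force and moment (generic sketch of the balance equations)."""
    F_ankle = m * a_com - F_grf - m * g
    cross2d = lambda r, F: r[0] * F[1] - r[1] * F[0]
    M_ankle = (I * alpha
               - cross2d(r_ankle - r_com, F_ankle)
               - cross2d(r_cop - r_com, F_grf))
    return F_ankle, M_ankle

# toy mid-stance numbers (illustrative only)
F, M = ankle_inverse_dynamics_2d(m=1.2, I=0.01,
                                 a_com=np.array([0.5, 0.2]), alpha=1.0,
                                 r_com=np.array([0.10, 0.05]),
                                 r_ankle=np.array([0.05, 0.10]),
                                 r_cop=np.array([0.15, 0.0]),
                                 F_grf=np.array([10.0, 700.0]))
print(F, M)
```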

  14. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANNs) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms that attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm that relies on network inversion, i.e., calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with any desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method that extracts hyperplane rules from continuous- or binary-attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented, along with an information-theoretic treatment of rule extraction. HYPINV is applied to example synthetic problems and a real aerospace problem, and is compared with similar algorithms on benchmark problems.
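
    The gradient-descent flavour of network inversion mentioned above treats the trained network as fixed and descends on its input until the output matches a desired target. A tiny illustration with a one-hidden-layer network and an analytic input gradient (the weights here are random placeholders, not a trained model, and this is not the HYPINV rule-extraction stage itself):

```python
import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)   # fixed "trained" net
w2, b2 = rng.standard_normal(8), 0.0

def forward(x):
    h = np.tanh(W1 @ x + b1)
    y = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))
    return y, h

def invert_network(y_target, x0, lr=0.5, n_iter=500):
    """Gradient descent on the *input* of a fixed network so that its output
    approaches y_target -- the core network-inversion step that rule-extraction
    methods such as HYPINV build on (tiny illustrative net, not the paper's)."""
    x = x0.copy()
    for _ in range(n_iter):
        y, h = forward(x)
        # d(0.5*(y - y_target)^2)/dx via the chain rule
        grad = (y - y_target) * y * (1.0 - y) * (W1.T @ ((1.0 - h ** 2) * w2))
        x -= lr * grad
    return x

x_inv = invert_network(0.9, np.zeros(2))
print(forward(x_inv)[0])   # close to 0.9 if the target is reachable
```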

  15. Inverse algorithms for 2D shallow water equations in presence of wet dry fronts: Application to flood plain dynamics

    NASA Astrophysics Data System (ADS)

    Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.

    2016-11-01

    The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust, and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows, data are generally complex, multi-scale, and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to breakthroughs in modelling shallow wet-dry front dynamics. The present study aims at deriving an accurate, positive, and stable finite volume scheme in the presence of dynamic wet-dry fronts, together with corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks, plus a real flood plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are computed and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the algorithms' efficiency.

  16. Reliable fusion of control and sensing in intelligent machines. Thesis

    NASA Technical Reports Server (NTRS)

    Mcinroy, John E.

    1991-01-01

    Although robotics research has produced a wealth of sophisticated control and sensing algorithms, very little research has been aimed at reliably combining these control and sensing strategies so that a specific task can be executed. To improve the reliability of robotic systems, analytic techniques are developed for calculating the probability that a particular combination of control and sensing algorithms will satisfy the required specifications. The probability can then be used to assess the reliability of the design. An entropy formulation is first used to quickly eliminate designs not capable of meeting the specifications. Next, a framework for analyzing reliability based on the first order second moment methods of structural engineering is proposed. To ensure performance over an interval of time, lower bounds on the reliability of meeting a set of quadratic specifications with a Gaussian discrete time invariant control system are derived. A case study analyzing visual positioning in robotic system is considered. The reliability of meeting timing and positioning specifications in the presence of camera pixel truncation, forward and inverse kinematic errors, and Gaussian joint measurement noise is determined. This information is used to select a visual sensing strategy, a kinematic algorithm, and a discrete compensator capable of accomplishing the desired task. Simulation results using PUMA 560 kinematic and dynamic characteristics are presented.

  17. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. The discrete cosine transform is then carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the image by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) at a comparable compression ratio could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
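
    The block-DCT/quantization stage of such a pipeline can be sketched in a few lines. The quantization matrix below is an arbitrary placeholder; in the proposed scheme the three matrices would instead be derived from HVS contrast sensitivity data for the Y, Cr, and Cb channels, and Huffman coding of the quantized coefficients is omitted here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, Q):
    """Forward 2-D DCT of one 8x8 block followed by quantization; the entropy
    (Huffman) coding stage of the pipeline is omitted here."""
    coeffs = dctn(block - 128.0, norm='ortho')     # level-shift then 2-D DCT
    return np.round(coeffs / Q)

def dequantize_block(q, Q):
    return idctn(q * Q, norm='ortho') + 128.0       # inverse of the above

# an assumed quantization matrix; the paper derives channel-specific matrices
# from contrast-sensitivity measurements rather than using this placeholder
Q = 1.0 + 2.0 * (np.arange(8)[:, None] + np.arange(8)[None, :])

rng = np.random.default_rng(6)
block = rng.integers(0, 256, (8, 8)).astype(float)  # one luminance block
rec = dequantize_block(quantize_block(block, Q), Q)
print(np.max(np.abs(block - rec)))                  # distortion introduced by quantization
```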

  18. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models could avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from spectrograms of traces, in both the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features; however, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.

  19. Velocity Structure of the Iran Region Using Seismic and Gravity Observations

    NASA Astrophysics Data System (ADS)

    Syracuse, E. M.; Maceira, M.; Phillips, W. S.; Begnaud, M. L.; Nippress, S. E. J.; Bergman, E.; Zhang, H.

    2015-12-01

    We present a 3D Vp and Vs model of Iran generated using a joint inversion of body wave travel times, Rayleigh wave dispersion curves, and high-wavenumber filtered Bouguer gravity observations. Our work has two main goals: 1) To better understand the tectonics of a prominent example of continental collision, and 2) To assess the improvements in earthquake location possible as a result of joint inversion. The body wave dataset is mainly derived from previous work on location calibration and includes the first-arrival P and S phases of 2500 earthquakes whose initial locations qualify as GT25 or better. The surface wave dataset consists of Rayleigh wave group velocity measurements for regional earthquakes, which are inverted for a suite of period-dependent Rayleigh wave velocity maps prior to inclusion in the joint inversion for body wave velocities. We use gravity anomalies derived from the global gravity model EGM2008. To avoid mapping broad, possibly dynamic features in the gravity field into variations in density and body wave velocity, we apply a high-pass wavenumber filter to the gravity measurements. We use a simple, approximate relationship between density and velocity so that the three datasets may be combined in a single inversion. The final optimized 3D Vp and Vs model allows us to explore how multi-parameter tomography addresses crustal heterogeneities in areas of limited coverage and improves travel time predictions. We compare earthquake locations from our models to independent locations obtained from InSAR analysis to assess the improvement in locations derived in a joint-inversion model in comparison to those derived in a more traditional body-wave-only velocity model.
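
    The high-pass wavenumber filtering step described above can be sketched with a simple FFT-domain filter; the Gaussian roll-off and cutoff used here are assumptions for illustration, not the filter parameters of the study.

```python
import numpy as np

def highpass_gravity(grav, dx, cutoff_wavelength):
    """High-pass wavenumber filter of a gridded gravity anomaly: suppress
    wavelengths longer than 'cutoff_wavelength' (a simple Gaussian roll-off
    is assumed; the exact filter used in the study is not specified here)."""
    ny, nx = grav.shape
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    kc = 1.0 / cutoff_wavelength
    H = 1.0 - np.exp(-(k / kc) ** 2)        # ~0 at k=0, ~1 for k >> kc
    return np.real(np.fft.ifft2(np.fft.fft2(grav) * H))

# toy anomaly: a broad regional trend plus a short-wavelength target
y, x = np.mgrid[0:128, 0:128] * 5.0          # 5 km grid spacing
grav = 0.002 * x + np.exp(-((x - 320) ** 2 + (y - 320) ** 2) / (2 * 30.0 ** 2))
residual = highpass_gravity(grav, dx=5.0, cutoff_wavelength=200.0)
print(residual.std(), grav.std())
```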

  20. Preparation time influences ankle and knee joint control during dynamic change of direction movements.

    PubMed

    Fuerst, Patrick; Gollhofer, Albert; Gehring, Dominic

    2017-04-01

    The influence of preparation time on ankle joint biomechanics during highly dynamic movements is largely unknown. The aim of this study was to evaluate the impact of limited preparation time on ankle joint loading during highly dynamic run-and-cut movements. Thirteen male basketball players performed 45°-sidestep-cutting and 180°-turning manoeuvres in reaction to light signals which appeared during the approach run. Both movements were executed under (1) an easy condition, in which the light signal appeared very early, (2) a medium condition and (3) a hard condition with very little time to prepare the movements. Maximum ankle inversion angles, moments and velocities during ground contact, as well as EMG signals of three lower extremity muscles, were analysed. In 180°-turning movements, reduced preparation time led to significantly increased maximum ankle inversion velocities. Muscular activation levels, however, did not change. Increased inversion velocities, without accompanying changes in muscular activation, may have the potential to destabilise the ankle joint when less preparation time is available. This may result in a higher injury risk during turning movements and should therefore be considered in ankle injury research and the aetiology of ankle sprains.

  1. Test-retest reliability of a new device for assessing ankle joint threshold to detect passive movement in healthy adults.

    PubMed

    Sun, Wei; Song, Qipeng; Yu, Bing; Zhang, Cui; Mao, Dewei

    2015-01-01

    This study aimed to evaluate the test-retest reliability of a new device for assessing ankle joint kinesthesia. This device could measure the passive motion threshold of four ankle joint movements, namely plantarflexion, dorsiflexion, inversion and eversion. A total of 21 healthy adults, including 13 males and 8 females, participated in the study. Each participant completed two sessions on two separate days with 1-week interval. The sessions were administered by the same experimenter in the same laboratory. At least 12 trials (three successful trials in each of the four directions) were performed in each session. The mean values in each direction were calculated and analysed. The ICC values of test-retest reliability ranged from 0.737 (dorsiflexion) to 0.935 (eversion), whereas the SEM values ranged from 0.21° (plantarflexion) to 0.52° (inversion). The Bland-Altman plots showed that the reliability of plantarflexion-dorsiflexion was better than that of inversion-eversion. The results evaluated the reliability of the new device as fair to excellent. The new device for assessing kinesthesia could be used to examine the ankle joint kinesthesia.

  2. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)

    PubMed Central

    Almusawi, Ahmed R. J.; Dülger, L. Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles. PMID:27610129

  3. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242).

    PubMed

    Almusawi, Ahmed R J; Dülger, L Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles.
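
    The key architectural idea, feeding the current joint configuration into the network alongside the desired pose, can be illustrated on a toy 2-link planar arm; the 6-DOF Denso arm and the authors' network design are replaced here by a small scikit-learn regressor trained on synthetic forward-kinematics data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fk(theta, l1=1.0, l2=0.8):
    """Forward kinematics of a 2-link planar arm (toy stand-in for the 6-DOF Denso)."""
    x = l1 * np.cos(theta[:, 0]) + l2 * np.cos(theta[:, 0] + theta[:, 1])
    y = l1 * np.sin(theta[:, 0]) + l2 * np.sin(theta[:, 0] + theta[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(7)
theta_target = rng.uniform(0, np.pi, (5000, 2))       # desired joint angles
pose_target = fk(theta_target)                        # desired end-effector pose
theta_current = theta_target + rng.normal(0, 0.1, theta_target.shape)

# the idea highlighted in the abstract: the input pattern contains the *current*
# joint configuration in addition to the desired pose
X = np.hstack([theta_current, pose_target])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, theta_target)

test = np.array([[0.5, 0.5, *fk(np.array([[0.7, 0.9]]))[0]]])
print(net.predict(test), "vs true", [0.7, 0.9])
```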

  4. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill a gap between initial and true model complexities and resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable to deliver meshes suitable to resolve features on multiple scales while keeping number of unknowns low. However, such meshes exhibit dependency on an initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution starting from a coarse discretization and refining mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  5. Robust tuning of robot control systems

    NASA Technical Reports Server (NTRS)

    Minis, I.; Uebel, M.

    1992-01-01

    The computed torque control problem is examined for a robot arm with flexible, geared joint drive systems, which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach to computed torque control combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel model-following torque control system. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in the evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
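
    For reference, the standard (rigid-joint) computed torque law that the schemes above build on is tau = M(q)(qdd_des + Kd*ed + Kp*e) + C(q, qd)qd + g(q). The single-link sketch below shows that baseline law only; the paper's joint torque controllers and the flexible, geared drive dynamics are not modelled.

```python
import numpy as np

# single rigid link: M(q) = I_link, gravity term g(q) = m*g*l*cos(q) (toy model)
I_link, m, l, g0 = 0.5, 2.0, 0.4, 9.81
M = lambda q: I_link
grav = lambda q: m * g0 * l * np.cos(q)

def computed_torque(q, qd, q_des, qd_des, qdd_des, Kp=100.0, Kd=20.0):
    """Standard computed-torque law tau = M(q)*(qdd_des + Kd*ed + Kp*e) + g(q);
    the paper's modifications for flexible, geared joint drives are not modelled."""
    e, ed = q_des - q, qd_des - qd
    return M(q) * (qdd_des + Kd * ed + Kp * e) + grav(q)

# simulate tracking of a sinusoidal joint trajectory with Euler integration
dt, q, qd = 0.001, 0.0, 0.0
for k in range(5000):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    tau = computed_torque(q, qd, q_des, qd_des, qdd_des)
    qdd = (tau - grav(q)) / M(q)          # plant dynamics (matches the model here)
    qd += qdd * dt
    q += qd * dt
print(abs(q - np.sin(5000 * dt)))         # small tracking error
```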

  6. SMI adaptive antenna arrays for weak interfering signals. [Sample Matrix Inversion

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.

    1986-01-01

    The performance of adaptive antenna arrays in the presence of weak interfering signals (below thermal noise) is studied. It is shown that a conventional adaptive antenna array sample matrix inversion (SMI) algorithm is unable to suppress such interfering signals. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is redefined such that the effect of thermal noise on the weights of adaptive arrays is reduced. Thus, the weights are dictated by relatively weak signals. It is shown that the modified algorithm provides the desired interference protection.
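
    Conventional SMI forms the weights as w proportional to the inverse sample covariance times the steering vector. The sketch below includes one plausible reading of the "redefined covariance": subtracting an estimated thermal-noise contribution from the diagonal so that the relatively weak signals dominate the weight computation. This is an assumption for illustration, not necessarily the modification used in the paper.

```python
import numpy as np

def smi_weights(X, steering, noise_power=None):
    """Sample-matrix-inversion weights w ~ R^{-1} s.  If 'noise_power' is given,
    an estimated thermal-noise contribution is subtracted from the diagonal of
    the sample covariance -- one plausible reading of the 'redefined covariance'
    in the abstract, not necessarily the authors' exact modification."""
    R = (X @ X.conj().T) / X.shape[1]              # sample covariance (element space)
    if noise_power is not None:
        R = R - noise_power * np.eye(R.shape[0])
        R += 1e-6 * np.eye(R.shape[0])             # keep R invertible
    w = np.linalg.solve(R, steering)
    return w / (steering.conj() @ w)               # unit gain toward the desired signal

# toy 4-element array: desired signal from broadside, weak interferer off-boresight
rng = np.random.default_rng(8)
n_el, n_snap = 4, 2000
s = np.ones(n_el, complex)                          # broadside steering vector
v_int = np.exp(1j * np.pi * np.sin(0.4) * np.arange(n_el))
X = (0.1 * v_int[:, None] * rng.standard_normal(n_snap)   # interference below noise
     + rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap)))
# subtract slightly less than the true noise power to keep the estimate well conditioned
w = smi_weights(X, s, noise_power=1.9)
print(np.abs(np.vdot(w, v_int)))                    # response toward the interferer
```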

  7. A cortically-inspired model for inverse kinematics computation of a humanoid finger with mechanically coupled joints.

    PubMed

    Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A

    2016-05-19

    The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features such as fingers having their two last joints mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, having its two last joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper-extremities, and has the potential to promote human-robot interactions.

  8. Joint inversion of time-lapse VSP data for monitoring CO2 injection at the Farnsworth EOR field in Texas

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Gao, K.; Balch, R. S.; Huang, L.

    2016-12-01

    During the Development Phase (Phase III) of the U.S. Southwest Regional Partnership on Carbon Sequestration (SWP), time-lapse 3D vertical seismic profiling (VSP) data were acquired to monitor CO2 injection/migration at the Farnsworth Enhanced Oil Recovery (EOR) field, in partnership with the industrial partner Chaparral Energy. The project aims to inject a million tons of carbon dioxide into the target formation, the deep oil-bearing Morrow Formation in the Farnsworth Unit EOR field. Quantitative time-lapse seismic monitoring has the potential to track CO2 movement in geologic carbon storage sites. Los Alamos National Laboratory (LANL) has recently developed new full-waveform inversion methods to jointly invert time-lapse seismic data for changes in elastic and anisotropic parameters in target monitoring regions such as a CO2 reservoir. We apply our new joint inversion methods to time-lapse VSP data acquired at the Farnsworth EOR field, and present some preliminary results showing geophysical property changes in the reservoir.

  9. Fast inversion of gravity data using the symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Zhaohai; Li, Fengting; Xu, Xuechun; Huang, Danian; Zhang, Dailei

    2017-02-01

    The subsurface three-dimensional (3D) model of density distribution is obtained by solving an under-determined linear equation established from gravity data. Here, we describe a new fast gravity inversion method to recover a 3D density model from gravity data. The subsurface is divided into a large number of rectangular blocks, each with an unknown constant density. The gravity inversion method introduces a stabiliser model norm with a depth weighting function to produce smooth models. The depth weighting function is combined with the model norm to counteract the skin effect of the gravity potential field. Because the number of density model parameters is NZ (the number of layers in the vertical subsurface domain) times the number of observed gravity data, there are far more unknowns than observations. Solving the full set of gravity inversion equations is very time-consuming, and applying a new algorithm to the gravity inversion can significantly reduce the number of iterations and the computational time. In this paper, a new symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient (CG) method is shown to be an appropriate algorithm for solving this Tikhonov cost function (the gravity inversion equation). The new, faster method is applied to Gaussian noise-contaminated synthetic data to demonstrate its suitability for 3D gravity inversion. To demonstrate the performance of the new algorithm on actual gravity data, we provide a case study that includes ground-based measurements of residual Bouguer gravity anomalies over the Humble salt dome near Houston, Gulf Coast Basin, off the shore of Louisiana. A 3D distribution of salt rock concentration is used to evaluate the inversion results recovered by the new SSOR iterative method. In the test model, the density values in the constructed model coincide with the known location and depth of the salt dome.
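
    The SSOR-preconditioned CG solver at the heart of the method can be sketched with SciPy: the preconditioner M = (D + wL) D^{-1} (D + wU) / (w(2 - w)) is applied through two triangular solves inside a LinearOperator and passed to the CG routine. The toy SPD system below merely stands in for the regularized gravity normal equations.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, spsolve_triangular

def ssor_preconditioner(A, omega=1.5):
    """Return a LinearOperator applying the SSOR preconditioner
    M^{-1} = omega*(2-omega) * (D + omega*U)^{-1} D (D + omega*L)^{-1}
    for a symmetric positive-definite sparse matrix A = L + D + U."""
    D = sp.diags(A.diagonal())
    L = sp.tril(A, k=-1, format='csr')
    U = sp.triu(A, k=1, format='csr')
    lowerM = (D + omega * L).tocsr()
    upperM = (D + omega * U).tocsr()
    d = A.diagonal()
    def apply(v):
        y = spsolve_triangular(lowerM, v, lower=True)
        y = d * y
        y = spsolve_triangular(upperM, y, lower=False)
        return omega * (2.0 - omega) * y
    return LinearOperator(A.shape, matvec=apply)

# toy SPD system standing in for the (regularized) gravity inversion equations
n = 50
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)
x, info = cg(A, b, M=ssor_preconditioner(A), maxiter=200)
print(info, np.linalg.norm(A @ x - b))
```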

  10. Hydrogeophysical Assessment of Aquifer Uncertainty Using Simulated Annealing driven MRF-Based Stochastic Joint Inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.

    2017-12-01

    Geophysical quantification of hydrogeological parameters typically involves limited, noisy measurements coupled with inadequate understanding of the target phenomenon. Hence, a deterministic solution is unrealistic in light of the largely uncertain inputs. Stochastic imaging (SI), in contrast, provides multiple equiprobable realizations that enable probabilistic assessment of aquifer properties in a realistic manner. Generation of geologically realistic prior models is central to SI frameworks. Higher-order statistics for representing prior geological features in SI are, however, usually borrowed from training images (TIs), which may produce undesirable outcomes if the TIs are unrepresentative of the target structures. The Markov random field (MRF)-based SI strategy provides a data-driven alternative to TI-based SI algorithms. In the MRF-based method, the simulation of spatial features is guided by Gibbs energy (GE) minimization. Local configurations with smaller GEs have a higher likelihood of occurrence and vice versa. The parameters of the Gibbs distribution for computing the GE are estimated from the hydrogeophysical data, thereby enabling the generation of site-specific structures in the absence of reliable TIs. In Metropolis-like SI methods, the variance of the transition probability controls the jump size. The procedure is a standard Markov chain Monte Carlo (McMC) method when a constant variance is assumed, and becomes simulated annealing (SA) when the variance (cooling temperature) is allowed to decrease gradually with time. We observe that in certain problems, the large variance typically employed at the beginning to hasten burn-in may be unsuitable for sampling at the equilibrium state. The power of SA stems from its flexibility to adaptively scale the variance at different stages of the sampling. Degeneration of results was reported in a previous implementation of the MRF-based SI strategy based on a constant variance. Here, we present an updated version of the algorithm based on SA that appears to resolve the degeneration problem with seemingly improved results. We illustrate the performance of the SA version with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.
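
    A minimal sketch of the contrast drawn above between constant-variance Metropolis sampling and a simulated-annealing-style schedule in which the proposal variance shrinks over time. The two-parameter misfit, proposal scale and cooling factor are illustrative assumptions and are unrelated to the paper's Gibbs-energy formulation.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy posterior: fit two parameters to noisy data; energy = 0.5 * ||d - f(m)||^2 / sigma^2
      x = np.linspace(0, 1, 50)
      m_true = np.array([2.0, -1.0])
      d = m_true[0] * x + m_true[1] * x**2 + 0.05 * rng.normal(size=x.size)

      def energy(m):
          r = d - (m[0] * x + m[1] * x**2)
          return 0.5 * np.sum(r**2) / 0.05**2

      def sample(n_iter, sigma0, cooling):
          """Metropolis sampler; cooling == 1 is the constant-variance case,
          cooling < 1 gradually shrinks the jump size (simulated annealing style)."""
          m, e, sigma = np.zeros(2), energy(np.zeros(2)), sigma0
          chain = []
          for _ in range(n_iter):
              m_prop = m + sigma * rng.normal(size=2)      # jump size set by sigma
              e_prop = energy(m_prop)
              if np.log(rng.uniform()) < e - e_prop:       # Metropolis acceptance
                  m, e = m_prop, e_prop
              chain.append(m.copy())
              sigma *= cooling                             # cool the proposal variance
          return np.array(chain)

      chain_mcmc = sample(5000, sigma0=0.5, cooling=1.0)    # constant variance
      chain_sa = sample(5000, sigma0=0.5, cooling=0.999)    # annealed variance
      print("constant-variance estimate:", chain_mcmc[-1000:].mean(axis=0))
      print("annealed estimate:        ", chain_sa[-1000:].mean(axis=0))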

  11. Forward and Inverse Predictive Model for the Trajectory Tracking Control of a Lower Limb Exoskeleton for Gait Rehabilitation: Simulation modelling analysis

    NASA Astrophysics Data System (ADS)

    Zakaria, M. A.; Majeed, A. P. P. A.; Taha, Z.; Alim, M. M.; Baarath, K.

    2018-03-01

    The movement of a lower limb exoskeleton requires a reasonably accurate control method for an effective gait therapy session to transpire. Trajectory tracking is a nontrivial passive rehabilitation technique for correcting the motion of a patient's impaired limb. This paper proposes an inverse predictive model that is coupled with the forward kinematics of the exoskeleton to estimate the behaviour of the system. A conventional PID control system is used to drive the joint angles to the desired values supplied by the inverse predictive model. The present study demonstrates that the inverse predictive model is capable of meeting the trajectory demand within an acceptable error tolerance. The findings further suggest that the predictive model of the exoskeleton can issue correct joint angle commands to the system.
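
    A minimal sketch of the control loop described above: a PID controller drives a single joint angle of a toy second-order joint model toward a reference trajectory standing in for the output of the inverse predictive model. The plant parameters, gains and the sinusoidal reference are all hypothetical.

      import numpy as np

      dt, T = 0.001, 5.0
      t = np.arange(0.0, T, dt)
      theta_ref = 0.4 * np.sin(2 * np.pi * 0.5 * t)   # stand-in for the inverse-model output

      # Simple second-order joint model: I*dd_theta + b*d_theta = u
      I, b = 0.05, 0.2
      Kp, Ki, Kd = 40.0, 10.0, 2.0

      theta, d_theta = 0.0, 0.0
      e_int, e_prev = 0.0, 0.0
      log = np.zeros_like(t)

      for k in range(t.size):
          e = theta_ref[k] - theta
          e_int += e * dt
          e_der = (e - e_prev) / dt
          u = Kp * e + Ki * e_int + Kd * e_der          # PID control signal
          dd_theta = (u - b * d_theta) / I              # joint dynamics
          d_theta += dd_theta * dt
          theta += d_theta * dt
          e_prev = e
          log[k] = theta

      err = np.abs(log[t > 1.0] - theta_ref[t > 1.0])
      print("max tracking error after 1 s: %.4f rad" % err.max())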

  12. Quantifying the Uncertainties and Multi-parameter Trade-offs in Joint Inversion of Receiver Functions and Surface Wave Velocity and Ellipticity

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2016-12-01

    When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined due to their complementary sensitivities. The transdimensional Bayesian (TB) approach in seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy), without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for the profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information and reduce trade-offs in model estimates. We then explore the effect of different types of misfit functions for receiver function inversion, which is a highly non-unique problem. We compare the synthetic inversion results obtained with L2-norm, cross-correlation-type, and integral-type misfit functions in terms of their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter is inverted (Vs in the case of SWD), assumed scaling relationships are often applied to account for sensitivity to other model parameters (e.g. Vp, ρ, ξ). Here we show that under a TB framework, we can eliminate scaling assumptions while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same parameterization or use independent parameterizations. We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.

  13. Diagnosis of retrodiscal tissue in painful temporomandibular joint (TMJ) by fluid-attenuated inversion recovery (FLAIR) signal intensity.

    PubMed

    Kuroda, Migiwa; Otonari-Yamamoto, Mika; Sano, Tsukasa; Fujikura, Mamiko; Wakoh, Mamoru

    2014-09-09

    Aims: The purpose of the present study is to analyze the fluid-attenuated inversion recovery (FLAIR) signal intensity of the retrodiscal tissue in a painful temporomandibular joint (TMJ), and to develop a diagnostic system based on FLAIR data. Methodology: The study was based on 33 joints of 17 patients referred for MR imaging of the TMJ. Regions of interest were placed over retrodiscal tissue and gray matter (GM) on FLAIR images. Using signal intensities of GM as reference points, signal intensity ratios (SIR) of retrodiscal tissue were calculated. SIRs in painful TMJ were compared with those in painless TMJ. Wilcoxon's Rank Sum Test was used to analyze the difference in SIRs between the painful and painless groups (P<0.05). Results: The SIRs of retrodiscal tissue were significantly higher in painful joints than in painless joints. Conclusion: FLAIR sequences provide a high signal in patients with a painful TMJ, suggesting that the retrodiscal tissue in a painful TMJ contains elements such as protein.

  14. Diagnosis of retrodiscal tissue in painful temporomandibular joint (TMJ) by fluid-attenuated inversion recovery (FLAIR) signal intensity.

    PubMed

    Kuroda, Migiwa; Otonari-Yamamoto, Mika; Sano, Tsukasa; Fujikura, Mamiko; Wakoh, Mamoru

    2015-10-01

    The purpose of the present study is to analyze the fluid-attenuated inversion recovery (FLAIR) signal intensity of the retrodiscal tissue in a painful temporomandibular joint (TMJ), and to develop a diagnostic system based on FLAIR data. The study was based on 33 joints of 17 patients referred for MR imaging of the TMJ. Regions of interest were placed over retrodiscal tissue and gray matter (GM) on FLAIR images. Using signal intensities of GM as reference points, signal intensity ratios (SIR) of retrodiscal tissue were calculated. SIRs in painful TMJ were compared with those in painless TMJ. Wilcoxon's Rank Sum Test was used to analyze the difference in SIRs between the painful and painless groups (P<0.05). The SIRs of retrodiscal tissue were significantly higher in painful joints than in painless joints. FLAIR sequences provide a high signal in patients with a painful TMJ, suggesting that the retrodiscal tissue in a painful TMJ contains elements such as protein.

  15. Improving M-SBL for Joint Sparse Recovery Using a Subspace Penalty

    NASA Astrophysics Data System (ADS)

    Ye, Jong Chul; Kim, Jong Min; Bresler, Yoram

    2015-12-01

    The multiple measurement vector problem (MMV) is a generalization of the compressed sensing problem that addresses the recovery of a set of jointly sparse signal vectors. One of the important contributions of this paper is to reveal that two seemingly unrelated state-of-the-art MMV joint sparse recovery algorithms - M-SBL (multiple sparse Bayesian learning) and subspace-based hybrid greedy algorithms - have a very important link. More specifically, we show that replacing the $\log\det(\cdot)$ term in M-SBL by a rank proxy that exploits the spark reduction property discovered in subspace-based joint sparse recovery algorithms provides significant improvements. In particular, if we use the Schatten-$p$ quasi-norm as the corresponding rank proxy, the global minimiser of the proposed algorithm becomes identical to the true solution as $p \rightarrow 0$. Furthermore, under the same regularity conditions, we show that convergence to a local minimiser is guaranteed using an alternating minimization algorithm that has closed-form expressions for each of the minimization steps, which are convex. Numerical simulations under a variety of scenarios in terms of SNR and condition number of the signal amplitude matrix demonstrate that the proposed algorithm consistently outperforms M-SBL and other state-of-the-art algorithms.

  16. SURFACE FLUID REGISTRATION OF CONFORMAL REPRESENTATION: APPLICATION TO DETECT DISEASE BURDEN AND GENETIC INFLUENCE ON HIPPOCAMPUS

    PubMed Central

    Shi, Jie; Thompson, Paul M.; Gutman, Boris; Wang, Yalin

    2013-01-01

    In this paper, we develop a new automated surface registration system based on surface conformal parameterization by holomorphic 1-forms, inverse consistent surface fluid registration, and multivariate tensor-based morphometry (mTBM). First, we conformally map a surface onto a planar rectangle space with holomorphic 1-forms. Second, we compute the surface conformal representation by combining its local conformal factor and mean curvature, and linearly scale the dynamic range of the conformal representation to form the feature image of the surface. Third, we align the feature image with a chosen template image via the fluid image registration algorithm, which has been extended into curvilinear coordinates to adjust for the distortion introduced by surface parameterization. The inverse consistent image registration algorithm is also incorporated in the system to jointly estimate the forward and inverse transformations between the study and template images. This alignment induces a corresponding deformation on the surface. We tested the system on the Alzheimer's Disease Neuroimaging Initiative (ADNI) baseline dataset to study AD symptoms on the hippocampus. In our system, by modeling a hippocampus as a 3D parametric surface, we nonlinearly registered each surface with a selected template surface. Then we used mTBM to analyze the morphometry difference between diagnostic groups. Experimental results show that the new system has better performance than two publicly available subcortical surface registration tools: FIRST and SPHARM. We also analyzed the genetic influence of the Apolipoprotein E ε4 allele (ApoE4), which is considered the most prevalent risk factor for AD. Our work successfully detected statistically significant differences between ApoE4 carriers and non-carriers in both patients with mild cognitive impairment (MCI) and healthy control subjects. The results show evidence that the ApoE genotype may be associated with accelerated brain atrophy, so our work provides a new MRI analysis tool that may help presymptomatic AD research. PMID:23587689

  17. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
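
    A simple sketch of the branch-splitting idea, using plain 2-means clustering as a stand-in for the paper's fuzzy clustering and an ordinary least-squares polynomial fit per branch in place of the recursive least-squares Sugeno training; the forward function y = x^2 and all settings are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      # Forward function y = f(x) = x^2 on [-2, 2]; its inverse x(y) is two-valued.
      x = rng.uniform(-2, 2, 400)
      y = x**2

      # Step 1: split the (x, y) samples into two branches with a simple 2-means clustering on x.
      centers = np.array([-1.0, 1.0])
      for _ in range(20):
          labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
          centers = np.array([x[labels == j].mean() for j in range(2)])

      # Step 2: fit one polynomial approximator per branch mapping y -> x
      # (a stand-in for one Sugeno system per branch).
      branch_fits = []
      for j in range(2):
          yj, xj = y[labels == j], x[labels == j]
          A = np.vander(np.sqrt(yj), 3)               # simple features of y
          coef, *_ = np.linalg.lstsq(A, xj, rcond=None)
          branch_fits.append(coef)

      # Query: recover both inverse values of y0 = 1.5
      y0 = 1.5
      for j, coef in enumerate(branch_fits):
          x_hat = np.vander(np.sqrt([y0]), 3) @ coef
          print("branch", j, "inverse estimate:", round(float(x_hat[0]), 4))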

  18. Dynamic Inversion based Control of a Docking Mechanism

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V.; Ippolito, Corey; Krishnakumar, Kalmanje

    2006-01-01

    The problem of position and attitude control of the Stewart-platform-based docking mechanism is considered, motivated by its future application in space missions requiring autonomous docking capability. The control design is initiated based on the framework of the intelligent flight control architecture being developed at NASA Ames Research Center. In this paper, the baseline position and attitude control system is designed using dynamic inversion with proportional-integral augmentation. The inverse dynamics uses a Newton-Euler formulation that includes the platform dynamics and the dynamics of the individual legs, along with viscous friction in the joints. Simulation results are presented using forward dynamics simulated by a commercial physics engine that builds the system as individual elements with appropriate joints and uses constrained numerical integration.
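
    The Stewart-platform Newton-Euler inverse dynamics is far beyond a short listing, so the sketch below applies the same dynamic-inversion idea to a one-degree-of-freedom pendulum joint: the modeled dynamics are cancelled and a proportional-integral augmentation (plus a velocity-error term, added here only to keep the toy error dynamics stable) shapes the tracking response. All parameters are hypothetical.

      import numpy as np

      dt, T = 0.001, 6.0
      t = np.arange(0.0, T, dt)
      q_ref = 0.5 * np.sin(t)                 # desired joint trajectory
      dq_ref = 0.5 * np.cos(t)
      ddq_ref = -0.5 * np.sin(t)

      # Plant: I*ddq + b*dq + m*g*l*sin(q) = u   (inertia, viscous friction, gravity)
      I, b, m, g, l = 0.1, 0.3, 1.0, 9.81, 0.5
      Kp, Ki, Kd = 100.0, 20.0, 20.0

      q, dq, e_int = 0.0, 0.0, 0.0
      err = np.zeros_like(t)
      for k in range(t.size):
          e = q_ref[k] - q
          e_int += e * dt
          # pseudo-acceleration: feedforward plus PI augmentation (and a velocity-error term)
          v = ddq_ref[k] + Kd * (dq_ref[k] - dq) + Kp * e + Ki * e_int
          # dynamic inversion: cancel the modeled nonlinear dynamics
          u = I * v + b * dq + m * g * l * np.sin(q)
          # propagate the true plant with explicit Euler
          ddq = (u - b * dq - m * g * l * np.sin(q)) / I
          dq += ddq * dt
          q += dq * dt
          err[k] = e

      print("RMS tracking error over the last second: %.2e rad"
            % np.sqrt(np.mean(err[t > T - 1.0] ** 2)))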

  19. A Common 16p11.2 Inversion Underlies the Joint Susceptibility to Asthma and Obesity

    PubMed Central

    González, Juan R.; Cáceres, Alejandro; Esko, Tonu; Cuscó, Ivon; Puig, Marta; Esnaola, Mikel; Reina, Judith; Siroux, Valerie; Bouzigon, Emmanuelle; Nadif, Rachel; Reinmaa, Eva; Milani, Lili; Bustamante, Mariona; Jarvis, Deborah; Antó, Josep M.; Sunyer, Jordi; Demenais, Florence; Kogevinas, Manolis; Metspalu, Andres; Cáceres, Mario; Pérez-Jurado, Luis A.

    2014-01-01

    The prevalence of asthma and obesity is increasing worldwide, and obesity is a well-documented risk factor for asthma. The mechanisms underlying this association and parallel time trends remain largely unknown but genetic factors may be involved. Here, we report on a common ∼0.45 Mb genomic inversion at 16p11.2 that can be accurately genotyped via SNP array data. We show that the inversion allele protects against the joint occurrence of asthma and obesity in five large independent studies (combined sample size of 317 cases and 543 controls drawn from a total of 5,809 samples; combined OR = 0.48, p = 5.5 × 10−6). Allele frequencies show remarkable worldwide population stratification, ranging from 10% in East Africa to 49% in Northern Europe, consistent with discordant and extreme genetic drifts or adaptive selections after human migration out of Africa. Inversion alleles strongly correlate with expression levels of neighboring genes, especially TUFM (p = 3.0 × 10−40) that encodes a mitochondrial protein regulator of energy balance and inhibitor of type 1 interferon, and other candidates for asthma (IL27) and obesity (APOB48R and SH2B1). Therefore, by affecting gene expression, the ∼0.45 Mb 16p11.2 inversion provides a genetic basis for the joint susceptibility to asthma and obesity, with a population attributable risk of 39.7%. Differential mitochondrial function and basal energy balance of inversion alleles might also underlie the potential selection signature that led to their uneven distribution in world populations. PMID:24560518

  20. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. The effect of benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al. [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).

  1. Towards Seismic Tomography Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.

    2006-12-01

    We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an 'adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies that a much greater number of phases per seismogram can be used compared to classical tomography, in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000 windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.
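
    The automated window-picking step mentioned at the end of the abstract can be illustrated with a classic short-term/long-term average (STA/LTA) trigger; the sketch below applies one to a synthetic trace. Window lengths, the threshold and the synthetic arrivals are assumptions, not the paper's criteria.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic seismogram: background noise plus two decaying "arrivals"
      dt = 0.01
      t = np.arange(0.0, 120.0, dt)
      trace = 0.1 * rng.normal(size=t.size)
      wavelet = np.sin(2 * np.pi * 1.0 * t[:500]) * np.exp(-t[:500] / 2.0)
      for t0 in (30.0, 70.0):
          i0 = int(t0 / dt)
          trace[i0:i0 + wavelet.size] += wavelet

      def moving_average(x, n):
          csum = np.concatenate(([0.0], np.cumsum(x)))
          out = np.zeros_like(x)
          out[n - 1:] = (csum[n:] - csum[:-n]) / n
          return out

      # Short-term / long-term average ratio on the squared trace
      n_sta, n_lta = int(1.0 / dt), int(10.0 / dt)
      energy = trace ** 2
      ratio = moving_average(energy, n_sta) / (moving_average(energy, n_lta) + 1e-12)
      ratio[:n_lta] = 0.0            # ignore samples where the long-term average is undefined

      # Declare a measurement window wherever the ratio first crosses the trigger threshold
      threshold = 4.0
      onsets = t[1:][(ratio[1:] > threshold) & (ratio[:-1] <= threshold)]
      print("windows triggered near t =", np.round(onsets, 1), "s")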

  2. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth's interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D P-SV elastodynamic equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography, for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy that helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, L-BFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed with METIS, allows most of the inversion to be applied in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.
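
    A toy illustration of the low-to-high frequency multiresolution strategy described above, using a 1D linear Fourier-sampling forward model and SciPy's L-BFGS-B in place of the DG solver and preconditioned quasi-Newton machinery of the paper; every operator and parameter here is an illustrative stand-in.

      import numpy as np
      from scipy.optimize import minimize

      # True 1D model and synthetic frequency-domain "observations"
      x = np.linspace(0.0, 1.0, 60)
      dx = x[1] - x[0]
      m_true = np.exp(-((x - 0.35) / 0.06) ** 2) - 0.7 * np.exp(-((x - 0.7) / 0.09) ** 2)
      freqs_all = np.arange(0, 31)

      def forward_matrix(freqs):
          # Linear stand-in forward operator: samples of the model's Fourier transform
          return np.exp(-2j * np.pi * np.outer(freqs, x)) * dx

      d_obs = forward_matrix(freqs_all) @ m_true

      def misfit_and_grad(m, freqs):
          A = forward_matrix(freqs)
          r = A @ m - d_obs[np.asarray(freqs)]
          return float(np.real(np.vdot(r, r))), 2.0 * np.real(A.conj().T @ r)

      # Multiresolution strategy: invert the lowest frequencies first, then restart
      # each higher-frequency band from the previous result.
      m = np.zeros_like(x)
      for band in (freqs_all[:4], freqs_all[:12], freqs_all):
          res = minimize(misfit_and_grad, m, args=(band,), jac=True, method="L-BFGS-B")
          m = res.x
          print("band up to f = %2d, misfit = %.3e" % (band[-1], res.fun))

      print("relative model error: %.3f"
            % (np.linalg.norm(m - m_true) / np.linalg.norm(m_true)))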

  3. Improvement of Forest Height Retrieval By Integration of Dual-Baseline PolInSAR Data And External DEM Data

    NASA Astrophysics Data System (ADS)

    Xie, Q.; Wang, C.; Zhu, J.; Fu, H.; Wang, C.

    2015-06-01

    In recent years, many studies have shown that polarimetric synthetic aperture radar interferometry (PolInSAR) is a powerful technique for forest height mapping and monitoring. However, few studies address the problem of the terrain slope effect, which is one of the major limitations for forest height inversion in mountainous forest areas. In this paper, we present a novel forest height retrieval algorithm that integrates dual-baseline PolInSAR data and external DEM data. For the first time, we successfully extend the S-RVoG (Sloped Random Volume over Ground) model for forest parameter inversion to the case of a dual-baseline PolInSAR configuration. In this case, the proposed method not only corrects the effect of terrain slope variation efficiently, but also involves more observations to improve the accuracy of parameter inversion. In order to demonstrate the performance of the inversion algorithm, a set of quad-pol images acquired at P-band in interferometric repeat-pass mode by the German Aerospace Center (DLR) with the Experimental SAR (E-SAR) system, in the frame of the BioSAR2008 campaign, has been used for the retrieval of forest height over the Krycklan boreal forest in northern Sweden. At the same time, a high-accuracy external DEM of the experimental area has been collected for computing terrain slope information, which is subsequently used as an input parameter in the S-RVoG model. Finally, in-situ ground-truth heights at stand level have been collected to validate the inversion result. The preliminary results show that the proposed inversion algorithm promises to provide much more accurate estimates of forest height than traditional dual-baseline inversion algorithms.

  4. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We aim to use EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure targeting source (or, equivalently, mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in the whole brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum-energy and directional-consistency constraints.

  5. Systematic Quantification of Stabilizing Effects of Subtalar Joint Soft-Tissue Constraints in a Novel Cadaveric Model.

    PubMed

    Pellegrini, Manuel J; Glisson, Richard R; Wurm, Markus; Ousema, Paul H; Romash, Michael M; Nunley, James A; Easley, Mark E

    2016-05-18

    Distinguishing between ankle instability and subtalar joint instability is challenging because the contributions of the subtalar joint's soft-tissue constraints are poorly understood. This study quantified the effects on joint stability of systematic sectioning of these constraints followed by application of torsional and drawer loads simulating a manual clinical examination. Subtalar joint motion in response to carefully controlled inversion, eversion, internal rotation, and external rotation moments and multidirectional drawer forces was quantified in fresh-frozen cadaver limbs. Sequential measurements were obtained under axial load approximating a non-weight-bearing clinical setting with the foot in neutral, 10° of dorsiflexion, and 10° and 20° of plantar flexion. The contributions of the components of the inferior extensor retinaculum were documented after incremental sectioning. The calcaneofibular, cervical, and interosseous talocalcaneal ligaments were then sectioned sequentially, in two different orders, to produce five different ligament-insufficiency scenarios. Incremental detachment of the components of the inferior extensor retinaculum had no effect on subtalar motion independent of foot position. Regardless of the subsequent ligament-sectioning order, significant motion increases relative to the intact condition occurred only after transection of the calcaneofibular ligament. Sectioning of this ligament produced increased inversion and external rotation, which was most evident with the foot dorsiflexed. Calcaneofibular ligament disruption results in increases in subtalar inversion and external rotation that might be detectable during a manual examination. Insufficiency of other subtalar joint constraints may result in motion increases that are too subtle to be perceptible. If calcaneofibular ligament insufficiency is established, its reconstruction or repair should receive priority over that of other ankle or subtalar periarticular soft-tissue structures. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.

  6. Efficient Jacobian inversion for the control of simple robot manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1988-01-01

    Symbolic inversion of the Jacobian matrix for spherical-wrist arms is investigated. It is shown that, taking advantage of the simple geometry of these arms, the closed-form solution of the system Q = J⁻¹X, representing a transformation from task space to joint space, can be obtained very efficiently. The solutions for the PUMA, Stanford, and a six-revolute-joint coplanar arm, along with all singular points, are presented. The solution for each joint variable is found as an explicit function of the singular points, which provides better insight into the effect of different singular points on the motion and force exertion of each individual joint. For the above arms, the computational cost of the solution is of the same order as the cost of the forward kinematic solution, and it is significantly reduced if the forward kinematic solution has already been obtained. A comparison with previous methods shows that this method is the most efficient to date.
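
    For a much simpler geometry than the PUMA or Stanford arms, the closed-form Jacobian-inversion idea can be shown on a planar two-revolute-joint arm, where the determinant reduces to l1*l2*sin(q2) and the joint rates follow directly from the task-space velocity. Link lengths and the test configuration are hypothetical.

      import numpy as np

      def jacobian_2r(q, l1=1.0, l2=0.8):
          """Geometric Jacobian of a planar 2R arm (end-effector position only)."""
          s1, c1 = np.sin(q[0]), np.cos(q[0])
          s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
          return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                           [ l1 * c1 + l2 * c12,  l2 * c12]])

      def joint_rates(q, xdot, l1=1.0, l2=0.8):
          """Closed-form inverse qdot = J^{-1} xdot, using det(J) = l1*l2*sin(q2)."""
          J = jacobian_2r(q, l1, l2)
          det = l1 * l2 * np.sin(q[1])
          if abs(det) < 1e-9:
              raise ValueError("singular configuration (elbow fully extended or folded)")
          Jinv = np.array([[ J[1, 1], -J[0, 1]],
                           [-J[1, 0],  J[0, 0]]]) / det
          return Jinv @ xdot

      q = np.array([0.6, 0.9])             # joint angles [rad]
      xdot = np.array([0.1, -0.05])        # desired end-effector velocity [m/s]
      qdot = joint_rates(q, xdot)
      print("joint rates:", qdot)
      print("check J @ qdot:", jacobian_2r(q) @ qdot)   # should reproduce xdot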

  7. Recursive inverse factorization.

    PubMed

    Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N

    2008-03-14

    A recursive algorithm for the inverse factorization S⁻¹ = ZZ* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
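
    A dense, non-recursive sketch of the iterative refinement underlying the method: starting from a scaled identity, Z is updated with the first-order correction Z <- Z (I + (I - Z^T S Z)/2) until Z Z^T approximates S^{-1}. The recursive decomposition and modularity-based partitioning of the paper are not reproduced, and the test matrix is an arbitrary SPD example.

      import numpy as np

      rng = np.random.default_rng(5)

      # Random symmetric positive definite "overlap" matrix S
      n = 50
      B = rng.normal(size=(n, n))
      S = B @ B.T + n * np.eye(n)

      # Iterative refinement of Z toward S^{-1} = Z Z^T:
      #   delta_k = I - Z_k^T S Z_k,   Z_{k+1} = Z_k (I + delta_k / 2)
      # The scaled-identity start keeps the spectral radius of delta_0 below 1.
      Z = np.eye(n) / np.sqrt(np.linalg.norm(S, 2))
      I = np.eye(n)
      for k in range(60):
          delta = I - Z.T @ S @ Z
          err = np.linalg.norm(delta, 2)
          if err < 1e-12:
              break
          Z = Z @ (I + 0.5 * delta)

      Sinv = np.linalg.inv(S)
      print("iterations:", k, "  ||I - Z^T S Z||:", err)
      print("relative error of Z Z^T vs S^{-1}:",
            np.linalg.norm(Sinv - Z @ Z.T) / np.linalg.norm(Sinv))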

  8. Joint transform correlator optical encryption system: Extensions of the recorded encrypted signal and its inverse Fourier transform

    NASA Astrophysics Data System (ADS)

    Galizzi, Gustavo E.; Cuadrado-Laborde, Christian

    2015-10-01

    In this work we study the joint transform correlator setup, finding two analytical expressions for the extensions of the joint power spectrum and its inverse Fourier transform. We found that optimum efficiency is reached when the bandwidth of the key code equals the sum of the bandwidths of the image and the random phase mask (RPM). The quality of the decryption is also affected by the ratio between the bandwidths of the RPM and the input image, improving as this ratio increases. In addition, the effect on the decrypted image when the detection area is smaller than the encrypted signal extension was analyzed. We illustrate these results through several numerical examples.

  9. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

    Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both the accuracy and efficiency of iterative algorithms for DVF inversion, and at advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective of enlarging the convergence area and expediting convergence. Three particular settings of feedback control are introduced: a constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
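
    A one-dimensional sketch of the fixed-point iteration with a constant feedback (relaxation) parameter, monitoring the inverse-consistency residual at each step; the adaptive and spatially variant controls of the paper are not reproduced, and the displacement field and alpha are illustrative.

      import numpy as np

      # 1D grid and a smooth forward displacement field u(x): y = x + u(x)
      x = np.linspace(0.0, 1.0, 200)
      u = 0.05 * np.sin(2 * np.pi * x)

      def u_at(pos):
          return np.interp(pos, x, u)

      # Fixed-point iteration for the inverse DVF v(y) with constant feedback control alpha:
      #   v <- v + alpha * ( -u(y + v) - v )
      v = np.zeros_like(x)          # v is sampled on the grid, interpreted as v(y)
      alpha = 0.8
      for k in range(50):
          residual = -u_at(x + v) - v          # inverse-consistency (IC) residual
          v = v + alpha * residual
          if np.max(np.abs(residual)) < 1e-10:
              break

      # Check: mapping forward with u and back with v should (approximately) return to x
      y = x + u
      x_back = y + np.interp(y, x, v)
      print("iterations:", k, " max IC residual:", np.max(np.abs(residual)))
      print("max round-trip error:", np.max(np.abs(x_back - x)))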

  10. Stochastic joint inversion of geoelectrical cross-well data for salt tracer test monitoring to image the hydraulic conductivity field of heterogenous aquifers

    NASA Astrophysics Data System (ADS)

    Revil, A.; Jardani, A.; Dupont, J.

    2012-12-01

    The assessment of the hydraulic conductivity of heterogeneous aquifers is a difficult task using traditional hydrogeological methods (e.g., steady-state or transient pumping tests) due to their low spatial resolution associated with a low density of available piezometers. Geophysical measurements performed at the ground surface and in boreholes provide additional information for increasing the resolution and accuracy of the inverted hydraulic conductivity. We use a stochastic joint inversion of Direct Current (DC) resistivity and Self-Potential (SP) data, plus in situ measurements of salinity in a downstream well during a synthetic salt tracer experiment, to reconstruct the hydraulic conductivity field of a heterogeneous aquifer. The pilot-point parameterization is used to avoid over-parameterization of the inverse problem. Bounds on the model parameters are used to promote a consistent Markov chain Monte Carlo sampling of the hydrogeological parameters of the model. To evaluate the effectiveness of the inversion process, we compare several scenarios in which the geophysical data are coupled or not coupled to the hydrogeological data to map the hydraulic conductivity. We first test the effectiveness of the inversion of each type of data alone, and then we combine the methods two by two. We finally combine all the information together to show the value of each type of geophysical data in the joint inversion process, given their different sensitivity maps. The results of the inversion reveal that the self-potential data improve the estimate of hydraulic conductivity, especially when the self-potential data are combined with the salt concentration measurements in the second well or with the time-lapse electrical resistivity data. Various tests are also performed to quantify the uncertainty in the inversion when, for instance, the semi-variogram is not known and its parameters must be inverted as well.

  11. Analysis of muscle activity and ankle joint movement during the side-hop test.

    PubMed

    Yoshida, Masahiro; Taniguchi, Keigo; Katayose, Masaki

    2011-08-01

    Functional performance tests (FPTs) that consist of movements such as hopping, landing, and cutting provide useful measurements. Although some tests have been established for kinematic studies of the knee joint, very few have been established for the ankle joint. To use an FPT as a test battery for patients with an ankle sprain, it is necessary to document typical patterns of muscle activation and range of motion (ROM) of the ankle joint during FPTs. Therefore, the purpose of this study was to investigate the pattern of the ROM of ankle inversion/eversion and the muscle activity of the peroneus longus muscle (PL) and the tibialis anterior muscle (TA) in normal subjects during the side-hop test. To emphasize the characteristics of ROM and electromyography (EMG) at each phase, the side-hop test was divided into 4 phases: lateral-hop contact phase (LC), lateral-hop flight phase (LF), medial-hop contact phase (MC), and medial-hop flight phase (MF), and the ROM of ankle inversion/eversion, the peak angle of ankle inversion, and the integral EMG (IEMG) of PL and TA were compared among the 4 phases. Fifteen male subjects with no symptoms of ankle joint problems participated in this research. The ROM of ankle inversion/eversion during the side-hop test was 27 ± 3.8° (mean ± SD), and there was a significant difference in the ROM of ankle inversion/eversion among the 4 phases (p < 0.05). The phase with the widest ROM was MF. The peak angle of ankle inversion at MC was significantly greater than at LC and MF (p < 0.05). The peak angle of ankle inversion at LF was significantly greater than at LC and MF. The PL remained active at 50-160% of maximal voluntary contraction (MVC). The IEMGs of PL in both contact phases were significantly greater than in both flight phases (p < 0.05). In addition, the PL activity at LC was significantly greater than at MC. The TA remained active at 50-80% of MVC throughout the side-hop test. The IEMG of TA in both contact phases was significantly greater than in the 2 flight phases. However, there was no significant difference between LC and MF. The results of this study could be useful as basic data when evaluating the validity of the side-hop test for patients with ankle sprain.

  12. A multi-sensor data-driven methodology for all-sky passive microwave inundation retrieval

    NASA Astrophysics Data System (ADS)

    Takbiri, Zeinab; Ebtehaj, Ardeshir M.; Foufoula-Georgiou, Efi

    2017-06-01

    We present a multi-sensor Bayesian passive microwave retrieval algorithm for flood inundation mapping at high spatial and temporal resolutions. The algorithm takes advantage of observations from multiple sensors in optical, short-infrared, and microwave bands, thereby allowing for detection and mapping of the sub-pixel fraction of inundated areas under almost all-sky conditions. The method relies on a nearest-neighbor search and a modern sparsity-promoting inversion method that make use of an a priori dataset in the form of two joint dictionaries. These dictionaries contain almost overlapping observations by the Special Sensor Microwave Imager and Sounder (SSMIS) on board the Defense Meteorological Satellite Program (DMSP) F17 satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Aqua and Terra satellites. Evaluation of the retrieval algorithm over the Mekong Delta shows that it is capable of capturing to a good degree the inundation diurnal variability due to localized convective precipitation. At longer timescales, the results demonstrate consistency with the ground-based water level observations, denoting that the method is properly capturing inundation seasonal patterns in response to regional monsoonal rain. The calculated Euclidean distance, rank-correlation, and also copula quantile analysis demonstrate a good agreement between the outputs of the algorithm and the observed water levels at monthly and daily timescales. The current inundation products are at a resolution of 12.5 km and taken twice per day, but a higher resolution (order of 5 km and every 3 h) can be achieved using the same algorithm with the dictionary populated by the Global Precipitation Mission (GPM) Microwave Imager (GMI) products.

  13. Real-time marker-free motion capture system using blob feature analysis

    NASA Astrophysics Data System (ADS)

    Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho

    2005-02-01

    This paper presents a real-time marker-free motion capture system that can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motions using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. The 3-dimensional positions of the end-effectors are then restored and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct, in real time, the motions of many people wearing various clothes.
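
    The end-effector tracking step can be sketched with a constant-velocity Kalman filter applied to noisy 3D position measurements of a single point; the motion model, noise levels and simulated trajectory are assumptions, not the system's actual parameters.

      import numpy as np

      rng = np.random.default_rng(6)
      dt = 1.0 / 30.0                                  # camera frame interval

      # State: [x, y, z, vx, vy, vz]; constant-velocity model
      F = np.eye(6)
      F[:3, 3:] = dt * np.eye(3)
      H = np.hstack([np.eye(3), np.zeros((3, 3))])     # only positions are measured
      Q = 1e-3 * np.eye(6)                             # process noise
      R = 4e-4 * np.eye(3)                             # measurement noise (~2 cm std)

      # Simulated hand trajectory and noisy triangulated measurements
      t = np.arange(0, 3, dt)
      truth = np.stack([0.3 * np.sin(t), 0.3 * np.cos(t), 0.1 * t], axis=1)
      meas = truth + 0.02 * rng.normal(size=truth.shape)

      xk = np.zeros(6)
      P = np.eye(6)
      est = np.zeros_like(truth)
      for k, z in enumerate(meas):
          # Predict
          xk = F @ xk
          P = F @ P @ F.T + Q
          # Update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          xk = xk + K @ (z - H @ xk)
          P = (np.eye(6) - K @ H) @ P
          est[k] = xk[:3]

      print("raw measurement RMSE: %.4f m" % np.sqrt(np.mean((meas - truth) ** 2)))
      print("filtered RMSE:        %.4f m" % np.sqrt(np.mean((est[10:] - truth[10:]) ** 2)))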

  14. A spatial operator algebra for manipulator modeling and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, K.; Milman, M.

    1988-01-01

    A powerful new spatial operator algebra for modeling, control, and trajectory design of manipulators is discussed, along with its implementation in the Ada programming language. Applications of this algebra to robotics include an operator representation of the manipulator Jacobian matrix; the robot dynamical equations formulated in terms of the spatial algebra, showing complete equivalence with the recursive Newton-Euler formulation of robot dynamics; the operator factorization and inversion of the manipulator mass matrix, which immediately results in O(N) recursive forward dynamics algorithms; the joint accelerations of a manipulator due to a tip contact force; the recursive computation of the equivalent mass matrix as seen at the tip of a manipulator; and recursive forward dynamics of a closed-chain system. Finally, additional applications and current research involving the use of the spatial operator algebra are discussed in general terms.

  15. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses the radiances into a much smaller dimension, making both the forward modeling and the inversion algorithm more efficient.
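
    A toy sketch of the principal-component idea: an ensemble of simulated spectra is compressed into a few PC scores, and a retrieval (a simple linear regression here, standing in for the physical inversion) operates on those scores instead of the individual channels. The two-parameter forward model and all dimensions are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)

      # Toy "radiative transfer": a 2-parameter state mapped to a 500-channel spectrum
      n_train, n_chan = 2000, 500
      nu = np.linspace(0.0, 1.0, n_chan)

      def forward(states):
          # states: (n, 2) array -> (n, n_chan) radiances
          return (states[:, :1] * np.sin(4 * np.pi * nu)
                  + states[:, 1:] * np.exp(-5 * nu)
                  + 0.2 * nu)

      states = rng.uniform(-1, 1, size=(n_train, 2))
      radiances = forward(states)

      # Compress the channel space with principal components of the training radiances
      mean_rad = radiances.mean(axis=0)
      _, _, Vt = np.linalg.svd(radiances - mean_rad, full_matrices=False)
      pcs = Vt[:4]                                    # keep 4 principal components
      scores = (radiances - mean_rad) @ pcs.T

      # Retrieval operating on PC scores instead of 500 channels (linear regression here)
      A = np.hstack([scores, np.ones((n_train, 1))])
      coef, *_ = np.linalg.lstsq(A, states, rcond=None)

      # Retrieve an unseen state from a noisy spectrum via its PC scores
      state_true = np.array([0.3, -0.6])
      obs = forward(state_true[None, :])[0] + 0.01 * rng.normal(size=n_chan)
      obs_scores = (obs - mean_rad) @ pcs.T
      state_ret = np.hstack([obs_scores, [1.0]]) @ coef
      print("true state:     ", state_true)
      print("retrieved state:", np.round(state_ret, 3))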

  16. Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, S.; Lindtjorn, O.

    2017-08-15

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.

  17. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses

    PubMed Central

    Wong, Tony E.; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095

  18. Achilles tendon moment arm in humans is not affected by inversion/eversion of the foot: a short report.

    PubMed

    Wolfram, Susann; Morse, Christopher I; Winwood, Keith L; Hodson-Tole, Emma; McEwan, Islay M

    2018-01-01

    The triceps surae primarily acts as plantarflexor of the ankle joint. However, the group also causes inversion and eversion at the subtalar joint. Despite this, the Achilles tendon moment arm is generally measured without considering the potential influence of inversion/eversion of the foot during plantarflexion. This study investigated the effect of foot inversion and eversion on the plantarflexion Achilles tendon moment arm. Achilles tendon moment arms were determined using the centre-of-rotation method in magnetic resonance images of the left ankle of 11 participants. The foot was positioned at 15° dorsiflexion, 0° or 15° plantarflexion using a Styrofoam wedge. In each of these positions, the foot was either 10° inverted, neutral or 10° everted using an additional Styrofoam wedge. Achilles tendon moment arm in neutral foot position was 47.93 ± 4.54 mm and did not differ significantly when the foot was positioned in 10° inversion and 10° eversion. Hence, inversion/eversion position of the foot may not considerably affect the length of the Achilles tendon moment arm. This information could be useful in musculoskeletal models of the human lower leg and foot and when estimating Achilles tendon forces during plantarflexion with the foot positioned in inversion or eversion.

  19. Particle Swarm Optimization algorithms for geophysical inversion, practical hints

    NASA Astrophysics Data System (ADS)

    Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.

    2008-12-01

    PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo, 2008). Based on this analogy we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations, and discretization step), both in the convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given to select the correct algorithm and to tune the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
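
    A minimal global-best PSO with the inertia and local/global acceleration parameters mentioned above, run on the Rosenbrock function cited in the abstract; the parameter values and swarm size are illustrative choices, not tuning recommendations from the paper.

      import numpy as np

      rng = np.random.default_rng(8)

      def rosenbrock(x):
          return np.sum(100.0 * (x[..., 1:] - x[..., :-1] ** 2) ** 2
                        + (1.0 - x[..., :-1]) ** 2, axis=-1)

      n_particles, n_dim, n_iter = 40, 5, 2000
      w, c1, c2 = 0.73, 1.5, 1.5              # inertia, local and global accelerations

      pos = rng.uniform(-2, 2, size=(n_particles, n_dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), rosenbrock(pos)
      gbest = pbest[np.argmin(pbest_val)].copy()
      gbest_val = pbest_val.min()

      for _ in range(n_iter):
          r1 = rng.uniform(size=pos.shape)
          r2 = rng.uniform(size=pos.shape)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          val = rosenbrock(pos)
          better = val < pbest_val
          pbest[better], pbest_val[better] = pos[better], val[better]
          if pbest_val.min() < gbest_val:
              gbest_val = pbest_val.min()
              gbest = pbest[np.argmin(pbest_val)].copy()

      print("best misfit: %.3e at" % gbest_val, np.round(gbest, 3))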

  20. 4D inversion of time-lapse magnetotelluric data sets for monitoring geothermal reservoir

    NASA Astrophysics Data System (ADS)

    Nam, Myung Jin; Song, Yoonho; Jang, Hannuree; Kim, Bitnarae

    2017-06-01

    The productivity of a geothermal reservoir, which is a function of the pore space and fluid-flow paths of the reservoir, varies as the properties of the reservoir change with geothermal production. Because the variation in the reservoir properties causes changes in electrical resistivity, time-lapse (TL) three-dimensional (3D) magnetotelluric (MT) methods can be applied to monitor the productivity variation of a geothermal reservoir, owing not only to their sensitivity to electrical resistivity but also to their large depth of penetration. For an accurate interpretation of TL MT data sets, a four-dimensional (4D) MT inversion algorithm has been developed to simultaneously invert all vintage data while considering the time coupling between vintages. However, the changes in electrical resistivity of deep geothermal reservoirs are usually small, generating only minimal variation in the TL MT responses. Maximizing the sensitivity of the inversion to the changes in resistivity is therefore critical to the success of 4D MT inversion. Thus, we further developed a focused 4D MT inversion method that considers not only the location of the reservoir but also the distribution of newly generated fractures during production. For the evaluation of the 4D MT algorithm, we tested our 4D inversion algorithms using synthetic TL MT data sets.

  1. Voxel inversion of airborne electromagnetic data for improved model integration

    NASA Astrophysics Data System (ADS)

    Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders

    2014-05-01

    Inversion of electromagnetic data has migrated from single-site interpretations to inversions covering entire surveys, using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points. For airborne electromagnetic (AEM) surveys, the spatial discretization of the model space reflects the flight lines. In contrast, geological and groundwater models most often refer to a regular voxel grid that is not correlated with the geophysical model space, and the geophysical information has to be relocated for integration in (hydro)geological models. We have developed a new geophysical inversion algorithm working directly on a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the centres of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, in Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km X 16 km. The voxel inversion was carried out on a structured grid of 260 X 325 X 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054 spatially constrained 1D models with 29 layers. For comparison, the SCI inversion models have been gridded onto the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the position of the acquired data, and at the same time fits the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for directly informing (hydro)geological models and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.
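
    A small sketch of steps 1 and 2 of the voxel forward computation described above: virtual layer centres are defined under a sounding position and node resistivities are interpolated to them by inverse-distance weighting (one of the interpolation functions mentioned); the resulting 1D model would then be passed to a 1D AEM forward routine, which is only a placeholder print here. The grid layout, layer thicknesses and values are hypothetical.

      import numpy as np

      rng = np.random.default_rng(9)

      # Voxel model space: regular xyz nodes carrying log-resistivity values
      nx, ny, nz = 10, 10, 8
      xn, yn, zn = np.arange(nx) * 50.0, np.arange(ny) * 50.0, np.arange(nz) * 10.0
      X, Y, Z = np.meshgrid(xn, yn, zn, indexing="ij")
      nodes = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
      log_rho_nodes = rng.normal(np.log(50.0), 0.3, size=nodes.shape[0])

      def idw(points, values, query, power=2.0, eps=1e-6):
          """Inverse-distance weighted interpolation from nodes to query points."""
          d = np.linalg.norm(points[None, :, :] - query[:, None, :], axis=2)
          w = 1.0 / (d + eps) ** power
          return (w * values).sum(axis=1) / w.sum(axis=1)

      # Step 1: "virtual" 1D layering under one sounding position
      sounding_xy = np.array([237.0, 121.0])
      layer_thk = np.array([5.0, 7.0, 10.0, 14.0, 20.0])
      layer_tops = np.concatenate([[0.0], np.cumsum(layer_thk)[:-1]])
      centres_z = layer_tops + 0.5 * layer_thk
      query = np.column_stack([np.full(centres_z.size, sounding_xy[0]),
                               np.full(centres_z.size, sounding_xy[1]),
                               centres_z])

      # Step 2: interpolate node resistivities to the centres of the virtual layers
      log_rho_layers = idw(nodes, log_rho_nodes, query)

      # Step 3 (placeholder): this 1D model would be handed to a 1D AEM forward routine
      for thk, lr in zip(layer_thk, log_rho_layers):
          print("thickness %5.1f m   resistivity %6.1f ohm-m" % (thk, np.exp(lr)))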

  2. Anisotropic Lithospheric layering in the North American craton, revealed by Bayesian inversion of short and long period data

    NASA Astrophysics Data System (ADS)

    Roy, Corinna; Calo, Marco; Bodin, Thomas; Romanowicz, Barbara

    2016-04-01

    Competing hypotheses for the formation and evolution of continents are highly debated, including the theory of underplating by hot plumes and that of accretion by shallow subduction in continental or arc settings. In order to test these hypotheses, documenting structural layering in the cratonic lithosphere becomes especially important. Recent studies of seismic-wave receiver function data have detected a structural boundary under continental cratons at 100-140 km depth, which is too shallow to be consistent with the lithosphere-asthenosphere boundary (LAB), as inferred from seismic tomography and other geophysical studies. This leads to the conclusion that 1) the cratonic lithosphere may be thinner than expected, contradicting tomographic and other geophysical or geochemical inferences, or 2) the receiver function studies detect a mid-lithospheric discontinuity rather than the LAB. On the other hand, several recent studies documented significant changes in the direction of azimuthal anisotropy with depth that suggest layering in the anisotropic structure of the stable part of the North American continent. In particular, Yuan and Romanowicz (2010) combined long-period surface wave and overtone data with core-refracted shear wave (SKS) splitting measurements in a joint tomographic inversion. A question that arises is whether the anisotropic layering observed coincides with that obtained from receiver function studies. To address this question, we use a trans-dimensional Markov chain Monte Carlo (MCMC) algorithm to generate probabilistic 1D radially and azimuthally anisotropic shear wave velocity profiles for selected stations in North America. In the algorithm we jointly invert short-period data (Ps receiver functions, surface wave dispersion for Love and Rayleigh waves) and long-period data (SKS waveforms). By including three different data types, which sample different volumes of the Earth and have different sensitivities to structure, we overcome the problem of incompatible interpretations of models provided by only one data set. The resulting 1D profiles include both isotropic and anisotropic discontinuities in the upper mantle (above 350 km depth). The main advantage of our procedure is the avoidance of any intermediate processing steps, such as numerical deconvolution or the calculation of splitting parameters, which can be very sensitive to noise. Additionally, the number of layers, as well as the data noise and the presence of anisotropy, are treated as unknowns in the transdimensional Markov chain Monte Carlo algorithm. We recently demonstrated the power of this approach for two stations located in different tectonic settings (Bodin et al., 2015, submitted). Here we extend this approach to a broader range of settings within the North American continent.

  3. Investigation of the reconstruction accuracy of guided wave tomography using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Rao, Jing; Ratassepp, Madis; Fan, Zheng

    2017-07-01

    Guided wave tomography is a promising tool for accurately determining the remaining wall thickness at corrosion damage, which is among the major concerns for many industries. The full waveform inversion (FWI) algorithm is an attractive guided wave tomography method, which uses a numerical forward model to predict the waveform of guided waves propagating through corrosion defects, and an inverse model to reconstruct the thickness map from the ultrasonic signals captured by transducers around the defect. This paper discusses the reconstruction accuracy of the FWI algorithm on plate-like structures using simulations as well as experiments. It was shown that the algorithm can achieve a resolution of around 0.7 wavelengths for defects with smooth depth variations from acoustic modeling data, and about 1.5-2 wavelengths from elastic modeling data. Further analysis showed that the reconstruction accuracy also depends on the shape of the defect. It was demonstrated that the algorithm maintains its accuracy in the case of multiple defects, in contrast to conventional algorithms based on the Born approximation.

  4. Sodium inversion recovery MRI on the knee joint at 7 T with an optimal control pulse.

    PubMed

    Lee, Jae-Seung; Xia, Ding; Madelin, Guillaume; Regatte, Ravinder R

    2016-01-01

    In the field of sodium magnetic resonance imaging (MRI), inversion recovery (IR) is a convenient and popular method to select sodium in different environments. For the knee joint, IR has been used to suppress the signal from synovial fluid, which improves the correlation between the sodium signal and the concentration of glycosaminoglycans (GAGs) in cartilage tissue. For better inversion of the magnetization vector under spatial variations of the B0 and B1 fields, the IR sequence usually employs adiabatic pulses as the inversion pulse. On the other hand, it has been shown that RF shapes robust against variations of the B0 and B1 fields can be generated by numerical optimization based on optimal control theory. In this work, we compare the performance of fluid-suppressed sodium MRI of the knee joint in vivo between an implementation with an adiabatic inversion pulse in the IR sequence and one in which the adiabatic pulse is replaced by an optimal-control shaped pulse. While the optimal-control pulse reduces the RF power deposited in the body by 58%, the quality of fluid suppression and the signal level of sodium within cartilage are similar between the two implementations. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Effects of five hindfoot arthrodeses on foot and ankle motion: Measurements in cadaver specimens

    PubMed Central

    Zhang, Kun; Chen, Yanxi; Qiang, Minfei; Hao, Yini

    2016-01-01

    Single, double, and triple hindfoot arthrodeses are used to correct hindfoot deformities and relieve chronic pain. However, joint fusion may lead to dysfunction in adjacent articular surfaces. We compared range of motion in adjacent joints before and after arthrodesis to determine the effects of each procedure on joint motion. The theory of moment of couple, bending moment and balanced loading was applied to each of 16 fresh cadaver feet to induce dorsiflexion, plantarflexion, internal rotation, external rotation, inversion, and eversion. Range of motion was measured with a 3-axis coordinate measuring machine in a control foot and in feet after subtalar, talonavicular, calcaneocuboid, double, or triple arthrodesis. All arthrodeses restricted mainly internal-external rotation and inversion-eversion. The restriction in a double arthrodesis was more than that in a single arthrodesis, but that in a calcaneocuboid arthrodesis was relatively low. After triple arthrodeses, the restriction on dorsiflexion and plantarflexion movements was substantial, and internal-external rotation and inversion-eversion were almost lost. Considering that different arthrodesis procedures cause complex, three-dimensional hindfoot motion reductions, we recommend talonavicular or calcaneocuboid arthrodesis for patients with well-preserved functions of plantarflexion/dorsiflexion before operation, subtalar or calcaneocuboid arthrodesis for patients with well-preserved abduction/adduction, and talonavicular arthrodesis for patients with well-preserved eversion/inversion. PMID:27752084

  6. Kinect as a Tool for Gait Analysis: Validation of a Real-Time Joint Extraction Algorithm Working in Side View

    PubMed Central

    Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio

    2015-01-01

    The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device in a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only to locate and identify the positions of the joints. Unlike machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the joint positions output by the algorithm is evaluated by comparison with a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than those obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits. PMID:25594588

  7. Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation

    NASA Astrophysics Data System (ADS)

    Bueno, Diana R.; Montano, L.

    2017-04-01

    Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and unfeasible for rehabilitation therapy. No self-calibration algorithms have previously been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments) and also to perform full self-calibration (estimating the subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model as well as its calibration problem have obliged us to adopt the sum of Gaussians filter, which is suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.

  8. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    NASA Astrophysics Data System (ADS)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.
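
    The Poisson-regularized inversion itself is not detailed in the abstract; as a generic stand-in for inverting a counting spectrum with Poisson statistics, the sketch below applies the classical Richardson-Lucy (maximum-likelihood EM) iteration to a synthetic detector-response model. The response matrix and spectra are placeholders, not the PFRC-II calibration.

    import numpy as np

    def richardson_lucy(counts, R, n_iter=200):
        """Maximum-likelihood EM iteration for Poisson data: counts ~ Poisson(R @ f)."""
        f = np.full(R.shape[1], counts.sum() / R.shape[1])   # flat initial guess
        norm = R.sum(axis=0)                                  # column sums of the response
        for _ in range(n_iter):
            model = R @ f
            ratio = counts / np.maximum(model, 1e-12)
            f *= (R.T @ ratio) / np.maximum(norm, 1e-12)
        return f

    rng = np.random.default_rng(0)
    n_e, n_x = 60, 60                                   # electron-energy and x-ray-energy bins
    E = np.linspace(0.2, 8.0, n_e)                      # keV, synthetic energy axis

    # Placeholder response: an electron of a given energy emits photons at or below that energy.
    R = np.triu(np.ones((n_x, n_e))) / np.arange(1, n_e + 1)

    true_eedf = np.exp(-E / 1.5) + 0.05 * np.exp(-E / 4.0)   # synthetic two-temperature EEDF
    counts = rng.poisson(1e4 * (R @ true_eedf))

    est = richardson_lucy(counts.astype(float), R)
    print("relative L2 misfit of recovered EEDF:",
          np.linalg.norm(est - true_eedf * 1e4) / np.linalg.norm(true_eedf * 1e4))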

  9. Joint Inversion of 3d Mt/gravity/magnetic at Pisagua Fault.

    NASA Astrophysics Data System (ADS)

    Bascur, J.; Saez, P.; Tapia, R.; Humpire, M.

    2017-12-01

    This work shows the results of a joint inversion at the Pisagua Fault using 3D magnetotelluric (MT), gravity and regional magnetic data. The MT survey has poor coverage of the study area, with only 21 stations; however, it allows detection of a low-resistivity zone aligned with the Pisagua Fault trace that is interpreted as a damage zone. The integration of gravity and magnetic data, which have denser sampling and better coverage, adds detail and resolution to the detected low-resistivity structure and helps improve the structural interpretation using the resulting models (density, magnetic susceptibility and electrical resistivity). The joint inversion process minimizes a multi-term objective function that includes the data misfit, model roughness and coupling norms (cross-gradient and direct parameter relations) for all geophysical methods considered (MT, gravity and magnetic). This problem is solved iteratively using the Gauss-Newton method, which updates the model of each geophysical method, improving its individual data misfit, model roughness and the coupling with the other geophysical models. Dedicated 3D inversion codes were developed to solve the model updates for the magnetic and gravity methods, including the coupling norms with the additional geophysical parameters. The model update for the 3D MT is calculated using an iterative method that sequentially filters the prior model and the output model of a single 3D MT inversion process to obtain a resistivity model coupled with the gravity and magnetic solutions.
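
    A minimal sketch of the cross-gradient coupling term used in such structural joint inversions: for two property models m1 and m2 on a common grid, t = ∇m1 × ∇m2 vanishes wherever the models change in parallel. The grid, models and spacing below are illustrative only.

    import numpy as np

    def cross_gradient(m1, m2, spacing=(1.0, 1.0, 1.0)):
        """Cell-wise cross-gradient vector t = grad(m1) x grad(m2) on a regular 3D grid."""
        g1 = np.stack(np.gradient(m1, *spacing), axis=-1)   # (nx, ny, nz, 3)
        g2 = np.stack(np.gradient(m2, *spacing), axis=-1)
        return np.cross(g1, g2)

    # Two synthetic models sharing a spherical anomaly: structurally consistent.
    x, y, z = np.meshgrid(*[np.linspace(-1, 1, 32)] * 3, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    resistivity = 100.0 + 50.0 * (r < 0.5)          # ohm m
    density     = 2.6   + 0.3  * (r < 0.5)          # g/cc, same geometry

    t = cross_gradient(np.log(resistivity), density)
    print("mean |t| (structurally consistent models):", np.abs(t).mean())

    # Shifting one anomaly breaks the structural agreement and raises the norm.
    density_shifted = 2.6 + 0.3 * (np.sqrt((x - 0.4)**2 + y**2 + z**2) < 0.5)
    t2 = cross_gradient(np.log(resistivity), density_shifted)
    print("mean |t| (shifted anomaly):", np.abs(t2).mean())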

  10. Integrating Electromagnetic Data with Other Geophysical Observations for Enhanced Imaging of the Earth: A Tutorial and Review

    NASA Astrophysics Data System (ADS)

    Moorkamp, Max

    2017-09-01

    In this review, I discuss the basic principles of joint inversion and constrained inversion approaches and show a few instructive examples of applications of these approaches in the literature. Starting with some basic definitions of the terms joint inversion and constrained inversion, I use a simple three-layered model as a tutorial example that demonstrates the general properties of joint inversion with different coupling methods. In particular, I investigate to which extent combining different geophysical methods can restrict the set of acceptable models and under which circumstances the results can be biased. Some ideas on how to identify such biased results and how negative results can be interpreted conclude the tutorial part. The case studies in the second part have been selected to highlight specific issues such as choosing an appropriate parameter relationship to couple seismic and electromagnetic data and demonstrate the most commonly used approaches, e.g., the cross-gradient constraint and direct parameter coupling. Throughout the discussion, I try to identify topics for future work. Overall, it appears that integrating electromagnetic data with other observations has reached a level of maturity and is starting to move away from fundamental proof-of-concept studies to answering questions about the structure of the subsurface. With a wide selection of coupling methods suited to different geological scenarios, integrated approaches can be applied on all scales and have the potential to deliver new answers to important geological questions.

  11. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    PubMed

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and the selection of appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key to the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm that couples convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.

  12. Switching algorithm for maglev train double-modular redundant positioning sensors.

    PubMed

    He, Ning; Long, Zhiqiang; Xue, Song

    2012-01-01

    High-resolution positioning for maglev trains is implemented by detecting the tooth-slot structure of the long stator installed along the rail, but there are large joint gaps between long stator sections. When a positioning sensor is below a large joint gap, its positioning signal is invalidated, thus double-modular redundant positioning sensors are introduced into the system. This paper studies switching algorithms for these redundant positioning sensors. At first, adaptive prediction is applied to the sensor signals. The prediction errors are used to trigger sensor switching. In order to enhance the reliability of the switching algorithm, wavelet analysis is introduced to suppress measuring disturbances without weakening the signal characteristics reflecting the stator joint gap based on the correlation between the wavelet coefficients of adjacent scales. The time delay characteristics of the method are analyzed to guide the algorithm simplification. Finally, the effectiveness of the simplified switching algorithm is verified through experiments.

  13. Switching Algorithm for Maglev Train Double-Modular Redundant Positioning Sensors

    PubMed Central

    He, Ning; Long, Zhiqiang; Xue, Song

    2012-01-01

    High-resolution positioning for maglev trains is implemented by detecting the tooth-slot structure of the long stator installed along the rail, but there are large joint gaps between long stator sections. When a positioning sensor is below a large joint gap, its positioning signal is invalidated, thus double-modular redundant positioning sensors are introduced into the system. This paper studies switching algorithms for these redundant positioning sensors. At first, adaptive prediction is applied to the sensor signals. The prediction errors are used to trigger sensor switching. In order to enhance the reliability of the switching algorithm, wavelet analysis is introduced to suppress measuring disturbances without weakening the signal characteristics reflecting the stator joint gap based on the correlation between the wavelet coefficients of adjacent scales. The time delay characteristics of the method are analyzed to guide the algorithm simplification. Finally, the effectiveness of the simplified switching algorithm is verified through experiments. PMID:23112657

  14. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance may therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in demosaicking and zooming. It also moderately utilizes the spectral correlation between color planes. The experimental results confirm that the proposed algorithm produces excellent performance on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency. It provides a better tradeoff among adaptability, performance, and computational cost compared to existing algorithms.

  15. A joint tracking method for NSCC based on WLS algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Ruidan; Xu, Ying; Yuan, Hong

    2017-12-01

    The navigation signal based on a compound carrier (NSCC) has a flexible multi-carrier scheme and various configurable scheme parameters, which enable it to offer significant navigation augmentation in terms of spectral efficiency, tracking accuracy, multipath mitigation capability and anti-jamming performance compared with legacy navigation signals. Meanwhile, its characteristic scheme structure can provide auxiliary information for signal synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the weighted least squares (WLS) algorithm. In this method, the least squares algorithm is employed to jointly estimate each sub-carrier frequency shift, exploiting the linear relationship between sub-carrier frequency and Doppler shift and utilizing the known sub-carrier frequencies. In addition, the weighting matrix is set adaptively according to the sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results show that the tracking accuracy and sensitivity of this method outperform those of the single-carrier algorithm at lower SNR.
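
    As a sketch of the weighted least squares step described above, the code below jointly fits a single Doppler scale factor to noisy frequency-shift measurements on several sub-carriers, weighting each measurement by its sub-carrier power; the sub-carrier frequencies, powers and noise model are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical sub-carrier centre frequencies (Hz) and relative powers.
    f_sub = np.array([1.561098e9, 1.57542e9, 1.20714e9, 1.26852e9])
    power = np.array([1.0, 0.8, 0.5, 0.3])

    v_over_c_true = 2.5e-6                          # true Doppler scale (about 750 m/s)
    sigma = 2.0 / np.sqrt(power)                    # noisier shifts on weaker sub-carriers
    df_meas = v_over_c_true * f_sub + rng.normal(0.0, sigma)

    # Weighted least squares: minimise sum_i w_i (df_i - x * f_i)^2 with w_i ~ sub-carrier power.
    A = f_sub[:, None]
    W = np.diag(power)
    x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ df_meas)

    print("estimated Doppler scale:", x_hat[0])
    print("recovered Doppler on each sub-carrier (Hz):", x_hat[0] * f_sub)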

  16. A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.

    PubMed

    Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin

    2014-01-01

    This paper presents new simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot is kept arbitrary, which helps reduce the error when solving an ill-conditioned system. Computation of the determinant of a matrix is made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] is incorporated for computing the matrix inverse, thereby avoiding unnecessary calculations. These algorithms are highly classroom oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the development of fractions in most cases. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
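
    A minimal sketch of evaluating a determinant by elimination with a freely chosen pivot, reducing the matrix order by one at each iteration; the default pivot rule below (largest absolute entry) is just one possible choice and not necessarily the authors' scheme, and the dictionary-notation inverse is omitted.

    import numpy as np

    def determinant_flexible_pivot(A, pick_pivot=None):
        """Determinant by repeated pivot elimination with an arbitrary pivot choice.

        At each step a pivot (p, q) is selected, column q is eliminated with row p,
        and the matrix order is reduced by one; the accumulated sign and pivot
        product give the determinant."""
        A = np.array(A, dtype=float)
        if pick_pivot is None:
            # Default rule: largest absolute entry (helps on ill-conditioned systems).
            pick_pivot = lambda M: np.unravel_index(np.argmax(np.abs(M)), M.shape)
        det = 1.0
        while A.shape[0] > 1:
            p, q = pick_pivot(A)
            piv = A[p, q]
            if piv == 0.0:
                return 0.0                       # no nonzero pivot available -> singular
            det *= (-1) ** (p + q) * piv
            rows = [i for i in range(A.shape[0]) if i != p]
            cols = [j for j in range(A.shape[1]) if j != q]
            # Eliminate column q using row p, then drop row p and column q.
            A = A[np.ix_(rows, cols)] - np.outer(A[rows, q], A[p, cols]) / piv
        return det * A[0, 0]

    M = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 6.0]]
    print(determinant_flexible_pivot(M), "vs numpy:", np.linalg.det(np.array(M)))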

  17. 2.5D complex resistivity modeling and inversion using unstructured grids

    NASA Astrophysics Data System (ADS)

    Xu, Kaijun; Sun, Jie

    2016-04-01

    The complex resistivity characteristics of rocks and ores have long been recognized. Generally, the Cole-Cole model (CCM) is used to describe complex resistivity. It has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from the CCM parameters, such as the direct-current resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). It is therefore very important to obtain the complex parameters of a geologic body. It is difficult to approximate complex structures and terrain using a traditional rectangular grid. In order to enhance the numerical accuracy and rationality of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of the frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm in the inversion of 2.5D complex resistivity. An adaptive finite element method is applied to solve the 2.5D complex resistivity forward modeling of a horizontal electric dipole source. First of all, the CCM is introduced into Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, the pseudo delta function is used to distribute the electric dipole source. Then the electromagnetic fields can be expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by anomalous conductivity inhomogeneities. Finally, we calculated the electromagnetic field responses of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented based on the conjugate gradient algorithm. The conjugate gradient algorithm does not need to form the sensitivity matrix explicitly; it only requires the products of the sensitivity matrix, or its transpose, with vectors. In addition, the inversion target zones are meshed with fine grids and the background zones with coarse grids, which reduces the number of inversion cells and greatly improves computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The results of theoretical calculations indicate that modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Using unstructured grids can improve the modeling accuracy, but inversion with a large number of cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We thank the National Natural Science Foundation of China (41304094) for its support.
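
    For reference, the Cole-Cole complex resistivity mentioned above can be written as ρ(ω) = ρ0[1 − m(1 − 1/(1 + (iωτ)^c))]; the sketch below simply evaluates it over a frequency band with placeholder parameter values.

    import numpy as np

    def cole_cole(omega, rho0, m, tau, c):
        """Cole-Cole complex resistivity rho(omega) = rho0 * (1 - m*(1 - 1/(1 + (i*omega*tau)**c)))."""
        return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

    freqs = np.logspace(-2, 4, 7)                  # Hz
    omega = 2.0 * np.pi * freqs
    rho = cole_cole(omega, rho0=100.0, m=0.3, tau=0.01, c=0.5)   # placeholder CCM parameters

    for f, r in zip(freqs, rho):
        print(f"{f:10.2f} Hz   |rho| = {abs(r):7.2f} ohm m   phase = {np.angle(r, deg=True):6.2f} deg")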

  18. Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Ju, Wenyun; Sun, Kai

    In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when considering the interdependence of different types of outages. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving the efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when there are no data for them.
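
    As a sketch of how the offspring mean matrix quantifies propagation, the code below computes its largest eigenvalue (the criticality indicator) and simulates Poisson multi-type cascades; the 3×3 matrix for line outages, load shed and isolated buses is purely illustrative and unrelated to the IEEE 118-bus results.

    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative offspring mean matrix: entry (i, j) is the mean number of type-j
    # "children" produced by one type-i outage (types: line outage, load shed, isolated bus).
    Lambda = np.array([[0.60, 0.20, 0.05],
                       [0.10, 0.30, 0.05],
                       [0.05, 0.10, 0.20]])

    rho = max(abs(np.linalg.eigvals(Lambda)))
    print("largest eigenvalue of offspring mean matrix:", round(rho, 3),
          "(subcritical)" if rho < 1 else "(critical or supercritical)")

    def simulate_cascade(Lambda, start=(1, 0, 0), max_gen=50):
        """Poisson multi-type Galton-Watson cascade; returns total outages of each type."""
        current = np.array(start)
        total = current.copy()
        for _ in range(max_gen):
            if current.sum() == 0:
                break
            # Each type-i parent spawns Poisson(Lambda[i, j]) children of type j.
            current = rng.poisson(current @ Lambda)
            total += current
        return total

    sizes = np.array([simulate_cascade(Lambda) for _ in range(2000)])
    print("mean total outages per cascade [lines, load shed, isolated buses]:",
          sizes.mean(axis=0).round(2))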

  19. Probing numerical Laplace inversion methods for two and three-site molecular exchange between interconnected pore structures.

    PubMed

    Silletta, Emilia V; Franzoni, María B; Monti, Gustavo A; Acosta, Rodolfo H

    2018-01-01

    Two-dimensional (2D) nuclear magnetic resonance relaxometry experiments are a powerful tool extensively used to probe the interaction among different pore structures, mostly in inorganic systems. The analysis of the collected experimental data generally consists of a 2D numerical inversion of time-domain data from which T2-T2 maps are generated. Through the years, different algorithms for the numerical inversion have been proposed. In this paper, two different algorithms for numerical inversion are tested and compared under different conditions of exchange dynamics: a method based on the Butler-Reeds-Dawson (BRD) algorithm and the fast iterative shrinkage-thresholding algorithm (FISTA). By constructing a theoretical model, the algorithms were tested for two- and three-site porous media, varying the exchange rate parameters, the pore sizes and the signal-to-noise ratio. In order to test the methods under realistic experimental conditions, a challenging organic system was chosen. The molecular exchange rates of water confined in hierarchical porous polymeric networks were obtained for two- and three-site porous media. Data processed with the BRD method were found to be accurate only under certain conditions of the exchange parameters, while data processed with the FISTA method are precise for all the studied parameters, except when SNR conditions are extreme. Copyright © 2017 Elsevier Inc. All rights reserved.
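
    As a generic illustration of the FISTA component, the sketch below applies a FISTA iteration with a nonnegativity projection to a 1D Laplace-type inversion (kernel exp(-t/T2)), the building block of the 2D T2-T2 problem; the kernel, grids and data are synthetic placeholders rather than the experimental setup.

    import numpy as np

    def fista_nonneg(K, y, n_iter=500):
        """FISTA with a nonnegativity projection for min ||K f - y||^2 s.t. f >= 0."""
        L = np.linalg.norm(K, 2) ** 2 * 2.0          # Lipschitz constant of the gradient
        f = np.zeros(K.shape[1])
        z, t = f.copy(), 1.0
        for _ in range(n_iter):
            grad = 2.0 * K.T @ (K @ z - y)
            f_new = np.maximum(z - grad / L, 0.0)    # gradient step + projection (prox)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = f_new + (t - 1.0) / t_new * (f_new - f)   # Nesterov momentum
            f, t = f_new, t_new
        return f

    rng = np.random.default_rng(3)
    t_echo = np.linspace(1e-3, 2.0, 400)                 # s, echo times
    T2_grid = np.logspace(-3, 0.5, 80)                   # s, relaxation-time grid
    K = np.exp(-t_echo[:, None] / T2_grid[None, :])      # Laplace-type kernel

    f_true = np.exp(-0.5 * ((np.log10(T2_grid) + 1.0) / 0.1) ** 2)   # one lognormal peak
    y = K @ f_true + rng.normal(0, 0.01, t_echo.size)

    f_est = fista_nonneg(K, y)
    print("peak T2 (true vs estimated):",
          T2_grid[np.argmax(f_true)], T2_grid[np.argmax(f_est)])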

  20. Optimization of a double inversion recovery sequence for noninvasive synovium imaging of joint effusion in the knee.

    PubMed

    Jahng, Geon-Ho; Jin, Wook; Yang, Dal Mo; Ryu, Kyung Nam

    2011-05-01

    We wanted to optimize a double inversion recovery (DIR) sequence to image joint effusion regions of the knee, especially intracapsular or intrasynovial imaging in the suprapatellar bursa and patellofemoral joint space. Computer simulations were performed to determine the optimum inversion times (TI) for suppressing both fat and water signals, and a DIR sequence was optimized based on the simulations for distinguishing synovitis from fluid. In vivo studies were also performed on individuals who showed joint effusion on routine knee MR images to demonstrate the feasibility of using the DIR sequence with a 3T whole-body MR scanner. To compare intracapsular or intrasynovial signals on the DIR images, intermediate density-weighted images and/or post-enhanced T1-weighted images were acquired. The timings to enhance the synovial contrast from the fluid components were TI1 = 2830 ms and TI2 = 254 ms for suppressing the water and fat signals, respectively. Improved contrast for the intrasynovial area in the knees was observed with the DIR turbo spin-echo pulse sequence compared to the intermediate density-weighted sequence. Imaging contrast obtained noninvasively with the DIR sequence was similar to that of the post-enhanced T1-weighted sequence. The DIR sequence may be useful for delineating synovium without using contrast materials.
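
    A sketch of how two inversion times can be chosen to null two tissues simultaneously: assuming a long TR and measuring TI1 from the first inversion pulse and TI2 from the second inversion pulse, both to the readout, the longitudinal signal is Mz(T1) = 1 − 2·exp(−TI2/T1) + 2·exp(−TI1/T1). The T1 values and timing convention below are illustrative assumptions, not the paper's protocol.

    import numpy as np
    from scipy.optimize import fsolve

    def dir_mz(ti1, ti2, t1):
        """Longitudinal magnetization at readout for a double inversion recovery
        sequence, assuming long TR; TI1/TI2 are measured from the first/second
        inversion pulse to the readout."""
        return 1.0 - 2.0 * np.exp(-ti2 / t1) + 2.0 * np.exp(-ti1 / t1)

    # Illustrative T1 values (ms) at 3 T -- assumptions for this sketch only.
    T1_FLUID, T1_FAT = 3300.0, 365.0

    def residual(tis):
        ti1, ti2 = tis
        return [dir_mz(ti1, ti2, T1_FLUID), dir_mz(ti1, ti2, T1_FAT)]

    ti1, ti2 = fsolve(residual, x0=[2500.0, 300.0])
    print(f"TI1 ~ {ti1:.0f} ms (nulls fluid), TI2 ~ {ti2:.0f} ms (nulls fat)")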

  1. Effect of body weight support variation on muscle activities during robot assisted gait: a dynamic simulation study.

    PubMed

    Hussain, Shahid; Jamwal, Prashant K; Ghayesh, Mergen H

    2017-05-01

    While body weight support (BWS) intonation is vital during conventional gait training of neurologically challenged subjects, it is important to evaluate its effect during robot assisted gait training. In the present research we have studied the effect of BWS intonation on muscle activities during robotic gait training using dynamic simulations. A two-dimensional (2-D) musculoskeletal model of human gait was developed together with a 2-D model of a robotic orthosis capable of actuating the hip, knee and ankle joints simultaneously. The musculoskeletal model consists of eight major muscle groups, namely: soleus (SOL), gastrocnemius (GAS), tibialis anterior (TA), hamstrings (HAM), vasti (VAS), gluteus maximus (GLU), uniarticular hip flexors (iliopsoas, IP), and rectus femoris (RF). BWS was provided at levels of 0, 20, 40 and 60% during the simulations. In order to obtain a feasible set of muscle activities during subsequent gait cycles, an inverse dynamics algorithm along with a quadratic minimization algorithm was implemented. The dynamic parameters of the robot assisted human gait, such as joint angle trajectories, ground contact force (GCF), human limb joint torques and robot induced torques at different levels of BWS, were derived. The patterns of muscle activities at variable BWS were derived and analysed. For most of the gait cycle (GC) the muscle activation patterns are quite similar for all levels of BWS, as is apparent from the mean of muscle activities over the complete GC. The effect of BWS variation during robot assisted gait on muscle activities was thus studied by developing a dynamic simulation. It is expected that the proposed dynamic simulation approach will provide important inferences and information about the variations in muscle function consequent upon a change in BWS during robot assisted gait. This information will be important when investigating the influence of BWS intonation on neuromuscular parameters of interest during robotic gait training.

  2. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  3. Operational space trajectory tracking control of robot manipulators endowed with a primary controller of synthetic joint velocity.

    PubMed

    Moreno-Valenzuela, Javier; González-Hernández, Luis

    2011-01-01

    In this paper, a new control algorithm for operational space trajectory tracking control of robot arms is introduced. The new algorithm does not require velocity measurement and is based on (1) a primary controller which incorporates an algorithm to obtain synthesized velocity from joint position measurements and (2) a secondary controller which computes the desired joint acceleration and velocity required to achieve operational space motion control. The theory of singularly perturbed systems is crucial for the analysis of the closed-loop system trajectories. In addition, the practical viability of the proposed algorithm is explored through real-time experiments in a two degrees-of-freedom horizontal planar direct-drive arm. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.

  5. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    We design a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate the 3D laser scanner and the digital camera based on the direct linear transformation (DLT) algorithm is proposed. This method adds a camera distortion model to the traditional DLT algorithm; after repeated iterations, it solves for the interior and exterior orientation elements of the camera and achieves the joint calibration of the 3D laser scanner and digital camera. Experiments show that this method is reliable.
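
    A minimal sketch of the DLT step underlying such a joint calibration: given 3D points (e.g. from the scanner point cloud) and their image observations, the projection matrix is solved linearly by SVD; the distortion model and iterative refinement described above are omitted, and the target points and camera below are synthetic.

    import numpy as np

    def dlt_projection_matrix(X, uv):
        """Estimate a 3x4 projection matrix from 3D points X (n,3) and image points uv (n,2)."""
        A = []
        for (x, y, z), (u, v) in zip(X, uv):
            A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
            A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, float))
        return Vt[-1].reshape(3, 4)                 # null-space vector = DLT coefficients

    # Synthetic "calibration target": random 3D points and a known pinhole camera.
    rng = np.random.default_rng(4)
    X = rng.uniform([-1, -1, 4], [1, 1, 6], size=(12, 3))
    K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
    Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [0.3]])])
    P_true = K @ Rt

    xh = np.hstack([X, np.ones((12, 1))]) @ P_true.T
    uv = xh[:, :2] / xh[:, 2:3]

    P_est = dlt_projection_matrix(X, uv)
    P_est /= P_est[-1, -1]                          # fix overall scale for comparison
    print("max abs error vs true projection matrix:",
          np.abs(P_est - P_true / P_true[-1, -1]).max())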

  6. Calculating tissue shear modulus and pressure by 2D log-elastographic methods

    NASA Astrophysics Data System (ADS)

    McLaughlin, Joyce R.; Zhang, Ning; Manduca, Armando

    2010-08-01

    Shear modulus imaging, often called elastography, enables detection and characterization of tissue abnormalities. In this paper the data are two displacement components obtained from successive MR or ultrasound data sets acquired while the tissue is excited mechanically. A 2D plane strain elastic model is assumed to govern the 2D displacement, u. The shear modulus, μ, is unknown, and whether or not the first Lamé parameter, λ, is known, the pressure p = λ∇·u, which is present in the plane strain model, cannot be measured; it is unreliably computed from measured data and can be shown to be an order-one quantity in units of kPa. So here we present a 2D log-elastographic inverse algorithm that (1) simultaneously reconstructs the shear modulus, μ, and p, which together satisfy a first-order partial differential equation system, with the goal of imaging μ, (2) controls potential exponential growth in the numerical error, and (3) reliably reconstructs the quantity p in the inverse algorithm as compared to the same quantity computed with a forward algorithm. This work generalizes the log-elastographic algorithm in Lin et al (2009 Inverse Problems 25), which uses one displacement component, is derived assuming that the component satisfies the wave equation, and is tested on synthetic data computed with the wave equation model. The 2D log-elastographic algorithm is tested on 2D synthetic data and 2D in vivo data from the Mayo Clinic. We also exhibit examples to show that the 2D log-elastographic algorithm improves the quality of the recovered images as compared to the log-elastographic and direct inversion algorithms.

  7. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554

  8. Digital Oblique Remote Ionospheric Sensing (DORIS) Program Development

    DTIC Science & Technology

    1992-04-01

    ...waveforms. A new autoscaling technique for oblique ionograms, compatible with the ARTIST software (Reinisch and Huang, 1983; Gamache et al., 1985), is ... The development and performance of a complete oblique ionogram autoscaling and inversion algorithm is presented. The inversion algorithm uses a three ... OTH radar. Subject terms: Oblique Propagation; Oblique Ionogram Autoscaling; Electron Density Profile Inversion; Simulated ...

  9. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Bailey, Sean W.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    In clear shallow waters, light that is transmitted downward through the water column can reflect off the sea floor and thereby influence the water-leaving radiance signal. This effect can confound contemporary ocean color algorithms designed for deep waters where the seafloor has little or no effect on the water-leaving radiance. Thus, inappropriate use of deep water ocean color algorithms in optically shallow regions can lead to inaccurate retrievals of inherent optical properties (IOPs) and therefore have a detrimental impact on IOP-based estimates of marine parameters, including chlorophyll-a and the diffuse attenuation coefficient. In order to improve IOP retrievals in optically shallow regions, a semi-analytical inversion algorithm, the Shallow Water Inversion Model (SWIM), has been developed. Unlike established ocean color algorithms, SWIM considers both the water column depth and the benthic albedo. A radiative transfer study was conducted that demonstrated how SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Properties algorithm (GIOP) and Quasi-Analytical Algorithm (QAA), performed in optically deep and shallow scenarios. The results showed that SWIM performed well, whilst both GIOP and QAA showed distinct positive bias in IOP retrievals in optically shallow waters. The SWIM algorithm was also applied to a test region: the Great Barrier Reef, Australia. Using a single test scene and time series data collected by NASA's MODIS-Aqua sensor (2002-2013), a comparison of IOPs retrieved by SWIM, GIOP and QAA was conducted.

  10. Probabilistic joint inversion of waveforms and polarity data for double-couple focal mechanisms of local earthquakes

    NASA Astrophysics Data System (ADS)

    Wéber, Zoltán

    2018-06-01

    Estimating the mechanisms of small (M < 4) earthquakes is quite challenging. A common scenario is that neither the available polarity data alone nor the well predictable near-station seismograms alone are sufficient to obtain reliable focal mechanism solutions for weak events. To handle this situation we introduce here a new method that jointly inverts waveforms and polarity data following a probabilistic approach. The procedure called joint waveform and polarity (JOWAPO) inversion maps the posterior probability density of the model parameters and estimates the maximum likelihood double-couple mechanism, the optimal source depth and the scalar seismic moment of the investigated event. The uncertainties of the solution are described by confidence regions. We have validated the method on two earthquakes for which well-determined focal mechanisms are available. The validation tests show that including waveforms in the inversion considerably reduces the uncertainties of the usually poorly constrained polarity solutions. The JOWAPO method performs best when it applies waveforms from at least two seismic stations. If the number of the polarity data is large enough, even single-station JOWAPO inversion can produce usable solutions. When only a few polarities are available, however, single-station inversion may result in biased mechanisms. In this case some caution must be taken when interpreting the results. We have successfully applied the JOWAPO method to an earthquake in North Hungary, whose mechanism could not be estimated by long-period waveform inversion. Using 17 P-wave polarities and waveforms at two nearby stations, the JOWAPO method produced a well-constrained focal mechanism. The solution is very similar to those obtained previously for four other events that occurred in the same earthquake sequence. The analysed event has a strike-slip mechanism with a P axis oriented approximately along an NE-SW direction.

  11. A common 16p11.2 inversion underlies the joint susceptibility to asthma and obesity.

    PubMed

    González, Juan R; Cáceres, Alejandro; Esko, Tonu; Cuscó, Ivon; Puig, Marta; Esnaola, Mikel; Reina, Judith; Siroux, Valerie; Bouzigon, Emmanuelle; Nadif, Rachel; Reinmaa, Eva; Milani, Lili; Bustamante, Mariona; Jarvis, Deborah; Antó, Josep M; Sunyer, Jordi; Demenais, Florence; Kogevinas, Manolis; Metspalu, Andres; Cáceres, Mario; Pérez-Jurado, Luis A

    2014-03-06

    The prevalence of asthma and obesity is increasing worldwide, and obesity is a well-documented risk factor for asthma. The mechanisms underlying this association and parallel time trends remain largely unknown but genetic factors may be involved. Here, we report on a common ~0.45 Mb genomic inversion at 16p11.2 that can be accurately genotyped via SNP array data. We show that the inversion allele protects against the joint occurrence of asthma and obesity in five large independent studies (combined sample size of 317 cases and 543 controls drawn from a total of 5,809 samples; combined OR = 0.48, p = 5.5 × 10^(-6)). Allele frequencies show remarkable worldwide population stratification, ranging from 10% in East Africa to 49% in Northern Europe, consistent with discordant and extreme genetic drifts or adaptive selections after human migration out of Africa. Inversion alleles strongly correlate with expression levels of neighboring genes, especially TUFM (p = 3.0 × 10^(-40)) that encodes a mitochondrial protein regulator of energy balance and inhibitor of type 1 interferon, and other candidates for asthma (IL27) and obesity (APOB48R and SH2B1). Therefore, by affecting gene expression, the ~0.45 Mb 16p11.2 inversion provides a genetic basis for the joint susceptibility to asthma and obesity, with a population attributable risk of 39.7%. Differential mitochondrial function and basal energy balance of inversion alleles might also underlie the potential selection signature that led to their uneven distribution in world populations. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  12. Comparison of custom-moulded ankle orthosis with hinged joints and off-the-shelf ankle braces in preventing ankle sprain in lateral cutting movements.

    PubMed

    Lee, Winson C C; Kobayashi, Toshiki; Choy, Barton T S; Leung, Aaron K L

    2012-06-01

    A custom moulded ankle orthosis with hinged joints potentially offers a better control over the subtalar joint and the ankle joint during lateral cutting movements, due to total contact design and increase in material strength. To test the above hypothesis by comparing it to three other available orthoses. Repeated measures. Eight subjects with a history of ankle sprains (Grade 2), and 11 subjects without such history performed lateral cutting movements in four test conditions: 1) non-orthotic, 2) custom-moulded ankle orthosis with hinges, 3) Sport-Stirrup, and 4) elastic ankle sleeve with plastic support. A VICON motion analysis system was used to study the motions at the ankle and subtalar joints. The custom-moulded ankle orthosis significantly lowered the inversion angle at initial contact (p = 0.006) and the peak inversion angle (p = 0.000) during lateral cutting movements in comparison to non-orthotic condition, while the other two orthoses did not. The three orthoses did not affect the plantarflexion motions, which had been suggested by previous studies to be important in shock wave attenuation. The custom-moulded ankle orthosis with hinges could better control inversion and thus expected to better prevent ankle sprain in lateral cutting movements. Custom-moulded ankle orthoses are not commonly used in preventing ankle sprains. This study raises the awareness of the use of custom-moulded ankle orthoses which are expected to better prevent ankle sprains.

  13. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence. It originates from research on the flocking behavior of birds and the schooling behavior of fish. In this paper we introduce and apply this method to the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
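
    A compact PSO sketch on a toy misfit function (standing in for the fault gravity-anomaly misfit); swarm size, inertia and acceleration coefficients are typical textbook values, not those of the paper.

    import numpy as np

    def pso(objective, bounds, n_particles=30, n_iter=200, w=0.72, c1=1.49, c2=1.49, seed=0):
        """Minimal particle swarm optimizer for a scalar objective on box bounds."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        dim = lo.size
        x = rng.uniform(lo, hi, (n_particles, dim))          # positions
        v = np.zeros_like(x)                                 # velocities
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)]
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)]
        return gbest, pbest_f.min()

    # Toy misfit with its minimum near (1.5, -2.0); stands in for a gravity misfit.
    objective = lambda p: (p[0] - 1.5) ** 2 + (p[1] + 2.0) ** 2 + 0.1 * np.sin(5 * p[0]) ** 2
    best, best_f = pso(objective, bounds=[(-5, 5), (-5, 5)])
    print("PSO estimate:", best.round(3), "misfit:", round(best_f, 4))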

  14. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  15. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.

  16. Path planning for assembly of strut-based structures. Thesis

    NASA Technical Reports Server (NTRS)

    Muenger, Rolf

    1991-01-01

    A path planning method with collision avoidance for a general single chain nonredundant or redundant robot is proposed. Joint range boundary overruns are also avoided. The result is a sequence of joint vectors which are passed to a trajectory planner. A potential field algorithm in joint space computes incremental joint vectors Δq = Δq_a + Δq_c + Δq_r. Adding Δq to the robot's current joint vector leads to the next step in the path. Δq_a is obtained by computing the minimum norm solution of the underdetermined linear system J Δq_a = x_a, where x_a is a translational and rotational force vector that attracts the robot to its goal position and orientation. J is the manipulator Jacobian. Δq_c is a collision avoidance term encompassing collisions between the robot (links and payload) and obstacles in the environment as well as collisions among links and payload of the robot themselves. It is obtained in joint space directly. Δq_r is a function of the current joint vector and avoids joint range overruns. A higher level discrete search over candidate safe positions is used to provide alternatives in case the potential field algorithm encounters a local minimum and thus fails to reach the goal. The best first search algorithm A* is used for graph search. Symmetry properties of the payload and equivalent rotations are exploited to further enlarge the number of alternatives passed to the potential field algorithm.
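
    A sketch of the attraction term: the minimum norm joint increment Δq_a solving J Δq_a = x_a can be computed with the Jacobian pseudoinverse. The 3-link planar arm and attractive vector below are illustrative, and the collision (Δq_c) and joint-range (Δq_r) terms are omitted.

    import numpy as np

    def planar_jacobian(q, link_lengths):
        """Position Jacobian (2 x n) of a planar serial arm with revolute joints."""
        n = len(q)
        J = np.zeros((2, n))
        cum = np.cumsum(q)
        for i in range(n):
            # End-effector sensitivity to joint i: sum over links distal to joint i.
            J[0, i] = -np.sum(link_lengths[i:] * np.sin(cum[i:]))
            J[1, i] =  np.sum(link_lengths[i:] * np.cos(cum[i:]))
        return J

    L = np.array([1.0, 0.8, 0.5])
    q = np.array([0.3, -0.4, 0.6])            # current joint vector (rad)

    # Attractive translational "force" toward the goal (the x_a of the abstract).
    x_a = np.array([0.05, -0.02])

    J = planar_jacobian(q, L)
    dq_a = np.linalg.pinv(J) @ x_a            # minimum-norm solution of J dq_a = x_a
    q_next = q + dq_a                          # one step of the potential-field path

    print("dq_a =", dq_a.round(4), "   residual =", np.round(J @ dq_a - x_a, 8))
    print("next joint vector:", q_next.round(4))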

  17. Registration of knee joint surfaces for the in vivo study of joint injuries based on magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Rita W. T.; Habib, Ayman F.; Frayne, Richard; Ronsky, Janet L.

    2006-03-01

    In-vivo quantitative assessments of joint conditions and health status can help to increase understanding of the pathology of osteoarthritis, a degenerative joint disease that affects a large population each year. Magnetic resonance imaging (MRI) provides a non-invasive and accurate means to assess and monitor joint properties, and has become widely used for diagnosis and biomechanics studies. Quantitative analyses and comparisons of MR datasets require accurate alignment of anatomical structures, thus image registration becomes a necessary procedure for these applications. This research focuses on developing a registration technique for MR knee joint surfaces to allow quantitative study of joint injuries and health status. It introduces a novel idea of translating techniques originally developed for geographic data in the field of photogrammetry and remote sensing to register 3D MR data. The proposed algorithm works with surfaces that are represented by randomly distributed points with no requirement of known correspondences. The algorithm performs matching locally by identifying corresponding surface elements, and solves for the transformation parameters relating the surfaces by minimizing normal distances between them. This technique was used in three applications to: 1) register temporal MR data to verify the feasibility of the algorithm to help monitor diseases, 2) quantify patellar movement with respect to the femur based on the transformation parameters, and 3) quantify changes in contact area locations between the patellar and femoral cartilage at different knee flexion angles. The results indicate accurate registration and the proposed algorithm can be applied for in-vivo study of joint injuries with MRI.

  18. Two-dimensional probabilistic inversion of plane-wave electromagnetic data: methodology, model constraints and joint inversion with electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.

    2014-03-01

    Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.

  19. A Generic 1D Forward Modeling and Inversion Algorithm for TEM Sounding with an Arbitrary Horizontal Loop

    NASA Astrophysics Data System (ADS)

    Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao

    2016-08-01

    We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of the data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with Tikhonov regularization. We propose a new data-weighting method that minimizes the dependence on the initial model and enhances convergence stability. On a laptop with an i7-5700HQ CPU at 3.5 GHz, one inversion iteration for a 200-layer input model with a single receiver takes only 0.34 s, increasing to 0.53 s for data from four receivers at the same depth. For the case of four receivers at different depths, the inversion iteration runtime increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of the loop geometry is significant at early times and vanishes gradually as the TEM field diffuses. For a stratified earth, inversion of data from more than one receiver is useful for reducing noise and obtaining a more credible layered earth. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not significantly improve recovery of the resistive layer. Even with a down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above it. However, our modeling demonstrates remarkable improvement in detecting the resistive layer with receivers in or under this layer.
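
    A generic sketch of a Gauss-Newton update with Tikhonov (first-difference) regularization, as used in such 1D inversions: each iteration solves (J^T J + λ L^T L) Δm = J^T (d − f(m)) − λ L^T L m. The forward model below is a trivial stand-in, not the authors' TEM kernel.

    import numpy as np

    def gauss_newton_tikhonov(forward, jacobian, d_obs, m0, lam=1.0, n_iter=15):
        """Gauss-Newton iteration for min ||d - f(m)||^2 + lam * ||L m||^2,
        with L a first-difference roughness operator (Tikhonov regularization)."""
        m = m0.copy()
        n = m.size
        L = np.diff(np.eye(n), axis=0)                     # (n-1, n) roughness matrix
        for _ in range(n_iter):
            r = d_obs - forward(m)
            J = jacobian(m)
            A = J.T @ J + lam * L.T @ L
            b = J.T @ r - lam * L.T @ (L @ m)
            step = np.linalg.solve(A, b)
            m = m + np.clip(step, -1.0, 1.0)               # crude step limit keeps exp() stable
        return m

    # Trivial stand-in forward model: smooth weighted averages of exp(model), mimicking
    # the nonlinearity of mapping log-resistivity to measured voltages.
    rng = np.random.default_rng(5)
    n_layers, n_data = 20, 30
    G = rng.uniform(0.0, 1.0, (n_data, n_layers))
    G /= G.sum(axis=1, keepdims=True)
    forward = lambda m: G @ np.exp(m)
    jacobian = lambda m: G * np.exp(m)[None, :]

    m_true = np.log(np.interp(np.arange(n_layers), [0, 8, 12, 19], [100, 100, 10, 10]))
    d_obs = forward(m_true) + rng.normal(0, 0.01, n_data)

    m_est = gauss_newton_tikhonov(forward, jacobian, d_obs,
                                  m0=np.full(n_layers, np.log(30.0)), lam=0.05)
    print("true vs recovered resistivities (ohm m):")
    print(np.exp(m_true).round(1))
    print(np.exp(m_est).round(1))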

  20. Computational modeling to predict mechanical function of joints: application to the lower leg with simulation of two cadaver studies.

    PubMed

    Liacouras, Peter C; Wayne, Jennifer S

    2007-12-01

    Computational models of musculoskeletal joints and limbs can provide useful information about joint mechanics. Validated models can be used as predictive devices for understanding joint function and serve as clinical tools for predicting the outcome of surgical procedures. A new computational modeling approach was developed for simulating joint kinematics that are dictated by bone/joint anatomy, ligamentous constraints, and applied loading. Three-dimensional computational models of the lower leg were created to illustrate the application of this new approach. Model development began with generating three-dimensional surfaces of each bone from CT images and then importing them into the three-dimensional solid modeling software SOLIDWORKS and motion simulation package COSMOSMOTION. Through SOLIDWORKS and COSMOSMOTION, each bone surface file was filled to create a solid object and positioned, necessary components were added, and simulations were executed. Three-dimensional contacts were added to inhibit intersection of the bones during motion. Ligaments were represented as linear springs. Model predictions were then validated by comparison to two different cadaver studies: syndesmotic injury and repair, and ankle inversion following ligament transection. The syndesmotic injury model was able to predict tibial rotation, fibular rotation, and anterior/posterior displacement. In the inversion simulation, calcaneofibular ligament extension and angles of inversion compared well. Some experimental data proved harder to simulate accurately, due to certain software limitations and lack of complete experimental data. Other parameters that could not be easily obtained experimentally can be predicted and analyzed by the computational simulations. In the syndesmotic injury study, the force generated in the tibionavicular and calcaneofibular ligaments reduced with the insertion of the staple, indicating how this repair technique changes joint function. After transection of the calcaneofibular ligament in the inversion stability study, a major increase in force was seen in several of the ligaments on the lateral aspect of the foot and ankle, indicating the recruitment of other structures to permit function after injury. Overall, the computational models were able to predict joint kinematics of the lower leg with particular focus on the ankle complex. This same approach can be taken to create models of other limb segments such as the elbow and wrist. Additional parameters can be calculated in the models that are not easily obtained experimentally such as ligament forces, force transmission across joints, and three-dimensional movement of all bones. Muscle activation can be incorporated in the model through the action of applied forces within the software for future studies.

  1. Cortex Inspired Model for Inverse Kinematics Computation for a Humanoid Robotic Finger

    PubMed Central

    Gentili, Rodolphe J.; Oh, Hyuk; Molina, Javier; Reggia, James A.; Contreras-Vidal, José L.

    2013-01-01

    In order to approach human hand performance levels, artificial anthropomorphic hands/fingers have increasingly incorporated human biomechanical features. However, performing finger reaching movements to visual targets with the complex kinematics of multi-jointed, anthropomorphic actuators is a difficult problem. This is because the relationship between sensory and motor coordinates is highly nonlinear and often includes mechanical coupling of the last two joints. Recently, we developed a cortical model that learns the inverse kinematics of a simulated anthropomorphic finger. Here, we expand this previous work by assessing whether this cortical model is able to learn the inverse kinematics of an actual anthropomorphic humanoid finger that has its last two joints coupled and controlled by pneumatic muscles. The findings revealed that single 3D reaching movements, as well as more complex patterns of motion of the humanoid finger, were accurately and robustly performed by this cortical model while producing kinematics comparable to those of humans. This work contributes to the development of a bioinspired controller providing adaptive, robust and flexible control of dexterous robotic and prosthetic hands. PMID:23366569

  2. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

    NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) at low field. Most samples have a distribution of T2 values. Extraction of this distribution of T2s from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to date to solve this problem. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching the inversion results from a series of true decay data and noisy simulated data. In addition to simulation studies, the same approach was also applied to real experimental data to support the simulation results.
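
    The underlying problem is the inversion of a multiexponential kernel: the measured decay is d(t_i) = sum_j f(T2_j) exp(-t_i / T2_j) plus noise, with f >= 0 to be recovered. The sketch below illustrates this structure with plain Tikhonov regularization and a non-negativity constraint solved by NNLS on an augmented system; it is not the uniform-penalty (UPEN) scheme analyzed in the paper.

        import numpy as np
        from scipy.optimize import nnls

        # simulate a CPMG decay from a bimodal T2 distribution
        t = np.linspace(1e-3, 1.0, 400)                 # echo times (s)
        T2 = np.logspace(-3, 0, 100)                    # T2 grid (s)
        K = np.exp(-t[:, None] / T2[None, :])           # kernel K[i, j] = exp(-t_i / T2_j)
        f_true = (np.exp(-0.5 * ((np.log10(T2) + 2.0) / 0.15) ** 2)
                  + 0.5 * np.exp(-0.5 * ((np.log10(T2) + 0.7) / 0.15) ** 2))
        rng = np.random.default_rng(0)
        d = K @ f_true + 0.01 * rng.standard_normal(t.size)

        # Tikhonov-regularized non-negative inversion: min ||K f - d||^2 + alpha^2 ||f||^2, f >= 0
        alpha = 0.1
        K_aug = np.vstack([K, alpha * np.eye(T2.size)])
        d_aug = np.concatenate([d, np.zeros(T2.size)])
        f_est, _ = nnls(K_aug, d_aug)
        print("largest recovered peak near T2 =", T2[f_est.argmax()])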

  3. Joint Inversion of Geochemical Data and Geophysical Logs for Lithology Identification in CCSD Main Hole

    NASA Astrophysics Data System (ADS)

    Deng, Chengxiang; Pan, Heping; Luo, Miao

    2017-12-01

    The Chinese Continental Scientific Drilling (CCSD) main hole is located in the Sulu ultrahigh-pressure metamorphic (UHPM) belt, providing significant opportunities for studying the metamorphic strata structure, kinetic processes and tectonic evolution. Lithology identification is the primary and crucial stage for the above geoscientific research. To ease the burden on log analysts and improve the efficiency of lithology interpretation, many algorithms have been developed to automate the process of lithology prediction. Traditional statistical techniques, such as discriminant analysis and the K-nearest neighbors classifier, struggle to extract the nonlinear features of metamorphic rocks from complex geophysical log data. Artificial intelligence algorithms are capable of solving nonlinear problems, but most of them require careful parameter tuning to reach a global rather than a local optimum when establishing a model, and they also face challenges in balancing training accuracy against generalization ability. Optimization methods have been applied extensively in the inversion of reservoir parameters of sedimentary formations using well logs. However, when applied to metamorphic formations, it is difficult to obtain accurate solutions from the logging response equations of the optimization method because the nonstationary log signals overlap strongly. As the oxide contents of the various metamorphic rock types overlap less, this study explores an approach, based on a metamorphic formation model and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm, to identify lithology from oxide data. We first incorporate 11 geophysical logs and lab-collected geochemical data from 47 core samples to construct an oxide profile of the CCSD main hole using a backwards stepwise multiple regression method, which eliminates irrelevant input logs step by step for higher statistical significance and accuracy. Then we establish oxide response equations in accordance with the metamorphic formation model and employ the BFGS algorithm to minimize the objective function. Finally, we identify lithology according to the compositional component that accounts for the largest proportion. The results show that the lithology identified by this method is consistent with the core descriptions. Moreover, the method demonstrates the benefits of using oxide content as an adhesive to connect logging data with lithology, makes the metamorphic formation model more understandable and accurate, and avoids selecting a complex formation model and building nonlinear logging response equations.
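
    The optimization step amounts to minimizing, depth by depth, the misfit between observed oxide contents and those predicted from component fractions through the response equations. A minimal sketch using SciPy's BFGS routine follows; the response matrix, the three components and the soft sum-to-one penalty are illustrative assumptions, not the CCSD response equations.

        import numpy as np
        from scipy.optimize import minimize

        # hypothetical oxide response matrix: rows = oxides, columns = rock components
        A = np.array([[70.0, 50.0, 45.0],     # SiO2 content of each component (illustrative)
                      [14.0, 16.0, 12.0],     # Al2O3
                      [ 3.0,  9.0, 12.0]])    # FeO + MgO
        obs = np.array([55.0, 14.5, 8.5])     # observed oxide contents at one depth

        def objective(x):
            """Least-squares misfit plus a soft penalty keeping fractions near a unit sum."""
            misfit = A @ x - obs
            return np.sum(misfit ** 2) + 100.0 * (np.sum(x) - 1.0) ** 2

        res = minimize(objective, x0=np.full(3, 1.0 / 3.0), method="BFGS")
        fractions = res.x
        print("component fractions:", fractions, "-> dominant component:", fractions.argmax())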

  4. The State of Stress Beyond the Borehole

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Coblentz, D. D.; Maceira, M.; Delorey, A. A.; Guyer, R. A.

    2015-12-01

    The state of stress controls all in-situ reservoir activities, and yet we lack the quantitative means to measure it. This problem is important in light of the fact that the subsurface provides more than 80 percent of the energy used in the United States and serves as a reservoir for geological carbon sequestration, used fuel disposition, and nuclear waste storage. Adaptive control of subsurface fractures and fluid flow is a crosscutting challenge being addressed by the new Department of Energy SubTER Initiative, which has the potential to transform subsurface energy production and waste storage strategies. Our methodology to address this problem is based on a novel Advanced Multi-Physics Tomographic (AMT) approach for determining the state of stress, thereby facilitating our ability to monitor and control subsurface geomechanical processes. We developed the AMT algorithm for deriving the state of stress from integrated density and seismic velocity models and demonstrate its feasibility by applying the AMT approach to synthetic data sets to assess the accuracy and resolution of the method as a function of the quality and type of geophysical data. With this method we can produce regional- to basin-scale maps of the background state of stress and identify regions where stresses are changing. Our approach builds on our advances in the joint inversion of gravity and seismic data to obtain the elastic properties of the subsurface, and afterwards couples the output of this joint inversion with a theoretical model so that strain (and subsequently stress) can be computed. Ultimately we will obtain the differential state of stress over time to identify and monitor critically stressed faults and evolving regions within the reservoir, and relate them to anthropogenic activities such as fluid/gas injection.

  5. Joint Source Location and Focal Mechanism Inversion: efficiency, accuracy and applications

    NASA Astrophysics Data System (ADS)

    Liang, C.; Yu, Y.

    2017-12-01

    The analysis of induced seismicity has become a common practice to evaluate the results of hydraulic fracturing treatment. Liang et al. (2016) proposed a joint Source Scanning Algorithm (jSSA for short) to obtain microseismic event locations and focal mechanisms simultaneously. The jSSA is superior to the traditional SSA in many aspects, but its computational cost is too high for real-time monitoring. In this study, we have developed several scanning schemes to reduce computation time. A multi-stage scanning scheme is shown to improve efficiency significantly while retaining accuracy. A series of tests has been carried out using both real field data and synthetic data to evaluate the accuracy of the method and its dependence on noise level, source depth, focal mechanism and other factors. The surface-based arrays provide better constraints on horizontal location errors (<20 m) and angular errors of P axes (within 10 degrees, for S/N>0.5). For sources with varying rakes, dips, strikes and depths, the errors are mostly controlled by the partition of positive and negative polarities among the different quadrants. More evenly partitioned polarities in the different quadrants yield better results for both locations and focal mechanisms. Nevertheless, even with poor resolution for some focal mechanisms, the optimized jSSA method can still improve location accuracy significantly. Based on the much more densely distributed events and focal mechanisms, a gridded stress inversion is conducted to obtain an evenly distributed stress field. The full potential of the jSSA has yet to be explored in different directions, especially in earthquake seismology as seismic arrays become increasingly dense.

  6. Two-layer Crustal Structure of the Contiguous United States from Joint Inversion of USArray Receiver Functions and Gravity

    NASA Astrophysics Data System (ADS)

    Ma, X.; Lowry, A. R.

    2015-12-01

    The composition and thickness of crustal layering is fundamental to understanding the evolution and dynamics of continental lithosphere. Lowry and Pérez-Gussinyé (2011) found that the western Cordillera of the United States, characterized by active deformation and high heat flow, is strongly correlated with low bulk crustal seismic velocity ratio. They interpreted this observation as evidence that quartz controls continental tectonism and deformation. We will present new imaging of two-layer crustal composition and structure from cross-correlation of observed receiver functions and model synthetics. The cross-correlation coefficient of the two-layer model increases significantly relative to an assumed one-layer model, and the lower crustal thickness map from raw two-layer modeling (prior to Bayesian filtering with gravity models and Optimal Interpolation) clearly shows the Colorado Plateau and Appalachian boundaries, which are not apparent in upper crustal models; in addition, high vP/vS values fill most of the continental interior, while low vP/vS values occur along the western and eastern continental margins. In the presentation, we will show results of a new algorithm for joint Bayesian inversion of thickness and vP/vS of two-layer continental crustal structure. Recent thermodynamic modeling of geophysical models based on laboratory experimental data (Guerri et al., 2015) found that a large impedance contrast can be expected in the midcrust due to a phase transition that decreases plagioclase and increases clinopyroxene, without invoking any change in crustal chemistry. The depth of the transition depends on pressure, temperature and hydration, and in this presentation we will compare the layer thicknesses and vP/vS predicted by mineral thermodynamics to those we observe in the USArray footprint.

  7. The double universal joint wrist on a manipulator: Solution of inverse position kinematics and singularity analysis

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., III

    1992-01-01

    This paper presents three methods to solve the inverse position kinematics problem of the double universal joint attached to a manipulator: (1) an analytical solution for two specific cases; (2) an approximate closed-form solution based on ignoring the wrist offset; and (3) an iterative method which repeats closed-form position and orientation calculations until the solution is achieved. Several manipulators are used to demonstrate the solution methods: Cartesian, cylindrical, spherical, and an anthropomorphic articulated arm, based on the Flight Telerobotic Servicer (FTS) arm. A singularity analysis is presented for the double universal joint wrist attached to the above manipulator arms. While the double universal joint wrist standing alone is singularity-free in orientation, the singularity analysis indicates the presence of coupled position/orientation singularities of the spherical and articulated manipulators with the wrist. The Cartesian and cylindrical manipulators with the double universal joint wrist were found to be singularity-free. The methods of this paper can be implemented in a real-time controller for manipulators with the double universal joint wrist. Such mechanically dextrous systems could be used in telerobotic and industrial applications, but further work is required to avoid the singularities.

  8. Walking patterns and hip contact forces in patients with hip dysplasia.

    PubMed

    Skalshøi, Ole; Iversen, Christian Hauskov; Nielsen, Dennis Brandborg; Jacobsen, Julie; Mechlenburg, Inger; Søballe, Kjeld; Sørensen, Henrik

    2015-10-01

    Several studies have investigated walking characteristics in hip dysplasia patients, but so far none have described all hip rotational degrees of freedom during the whole gait cycle. This descriptive study reports 3D joint angles and torques, and furthermore extends previous studies with muscle and joint contact forces in 32 hip dysplasia patients and 32 matching controls. 3D motion capture data from walking and standing trials were analysed. Hip, knee, ankle and pelvis angles were calculated with inverse kinematics for both standing and walking trials. Hip, knee and ankle torques were calculated with inverse dynamics, while hip muscle and joint contact forces were calculated with static optimisation for the walking trials. No differences were found between the two groups while standing. While walking, patients showed decreased hip extension, increased ankle pronation and increased hip abduction and external rotation torques. Furthermore, hip muscle forces were generally lower and shifted to more posteriorly situated muscles, while the hip joint contact force was lower and directed more superiorly. During walking, patients showed lower and more superiorly directed hip joint contact force, which might alleviate pain from an antero-superiorly degenerated joint. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Adjoint-state inversion of electric resistivity tomography data of seawater intrusion at the Argentona coastal aquifer (Spain)

    NASA Astrophysics Data System (ADS)

    Fernández-López, Sheila; Carrera, Jesús; Ledo, Juanjo; Queralt, Pilar; Luquot, Linda; Martínez, Laura; Bellmunt, Fabián

    2016-04-01

    Seawater intrusion in aquifers is a complex phenomenon that can be characterized with the help of electric resistivity tomography (ERT) because of the low resistivity of seawater, which underlies the freshwater floating on top. The problem is complex because of the need for joint inversion of electrical and hydraulic (density-dependent flow) data. Here we present an adjoint-state algorithm to treat the electrical data. The adjoint-state method is a common technique for obtaining the derivatives of an objective function that depends on potentials with respect to the model parameters. Its main advantages are its simplicity for stationary problems and its reduced computational cost with respect to other methodologies. The relationship between chloride concentration and the resistivity of the field is well known, and these resistivities are in turn related to the potentials measured using ERT. Taking this into account, the different resistivity zones can be defined from the measured potential distribution by solving the inverse problem. The study zone is situated in Argentona (Baix Maresme, Catalonia), where the chloride concentrations measured in some wells are excessively high. The adjoint-state method will be used to invert the measured data using a new finite-element code written in C++ within the open-source framework Kratos. Finally, the numerical results obtained with our code will be checked against those obtained with other codes.
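
    For a self-adjoint forward problem A(m) u = q with predicted data P u, the adjoint-state method gives the misfit gradient at the cost of one extra solve: solve A(m)^T lambda = P^T (P u - d), then dJ/dm_i = -lambda^T (dA/dm_i) u. A minimal 1-D sketch under the assumption A(m) = D^T diag(m) D + eps*I (a conductance-weighted Laplacian), for which the gradient reduces to the elementwise product -(D lambda)*(D u):

        import numpy as np

        n = 50                                             # nodes
        D = np.diff(np.eye(n), axis=0)                     # difference operator, shape (n-1, n)
        eps = 1e-6

        def assemble(m):
            """A(m) = D^T diag(m) D + eps*I, with m the cell conductivities."""
            return D.T @ (m[:, None] * D) + eps * np.eye(n)

        P = np.eye(n)[::5]                                 # observe every 5th node
        m_true = np.ones(n - 1); m_true[20:30] = 5.0       # conductive anomaly
        q = np.zeros(n); q[0], q[-1] = 1.0, -1.0           # source/sink
        d = P @ np.linalg.solve(assemble(m_true), q)       # synthetic data

        def misfit_and_gradient(m):
            A = assemble(m)
            u = np.linalg.solve(A, q)                      # forward (state) solve
            r = P @ u - d
            lam = np.linalg.solve(A.T, P.T @ r)            # adjoint solve
            grad = -(D @ lam) * (D @ u)                    # dJ/dm_i = -lam^T (dA/dm_i) u
            return 0.5 * r @ r, grad

        J0, g = misfit_and_gradient(np.ones(n - 1))
        print("misfit:", J0, " gradient norm:", np.linalg.norm(g))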

  10. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, usually used to suppress multiple-access interference, struggle to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO) by integrating the particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to that of the maximum-likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
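
    For reference, the PSO component of such a hybrid is compact: each particle carries a velocity, its personal best and the swarm best, and moves according to v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x). The sketch below is a generic continuous PSO applied to a toy cost; the actual MUD problem is a discrete symbol search, and the ant-colony coupling of AC-PSO is not reproduced here.

        import numpy as np

        def pso(cost, dim, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1, 1, (n_particles, dim))        # positions
            v = np.zeros_like(x)                              # velocities
            pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
            g = pbest[pbest_cost.argmin()].copy()             # global best
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                c = np.array([cost(p) for p in x])
                improved = c < pbest_cost
                pbest[improved], pbest_cost[improved] = x[improved], c[improved]
                g = pbest[pbest_cost.argmin()].copy()
            return g, pbest_cost.min()

        # toy cost: distance of a candidate symbol vector from a "received" vector
        target = np.array([0.5, -0.3, 0.8, -0.9])
        best, best_cost = pso(lambda s: np.sum((s - target) ** 2), dim=4)
        print(best, best_cost)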

  11. Fast super-resolution estimation of DOA and DOD in bistatic MIMO Radar with off-grid targets

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Zhang, Yongshun; Zheng, Guimei; Feng, Cunqian; Tang, Jun

    2018-05-01

    In this paper, we focus on the problem of joint DOA and DOD estimation in bistatic MIMO radar using sparse reconstruction methods. Traditionally, the 2D parameter estimation problem is converted into a 1D problem via the Kronecker product, which enlarges the scale of the estimation problem and increases the computational burden. Furthermore, it requires that the targets fall on predefined grids. In this paper, a 2D off-grid model is built that solves the grid-mismatch problem of 2D parameter estimation. Then, in order to solve the joint 2D sparse reconstruction problem directly and efficiently, three fast joint sparse matrix reconstruction methods are proposed: the Joint-2D-OMP, Joint-2D-SL0 and Joint-2D-SOONE algorithms. Simulation results demonstrate that our methods not only improve the 2D parameter estimation accuracy but also reduce the computational complexity compared with the traditional Kronecker compressed sensing method.
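
    The greedy principle behind the Joint-2D-OMP variant is the same as in standard orthogonal matching pursuit: pick the dictionary atom most correlated with the residual, re-fit the signal on the selected support by least squares, and update the residual. A minimal 1-D OMP sketch (not the joint 2-D matrix formulation of the paper) follows:

        import numpy as np

        def omp(A, y, k):
            """Recover a k-sparse x with y ~= A @ x by orthogonal matching pursuit."""
            residual = y.copy()
            support = []
            x = np.zeros(A.shape[1])
            for _ in range(k):
                corr = np.abs(A.T @ residual)
                corr[support] = 0.0                       # do not reselect atoms
                support.append(int(corr.argmax()))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x[support] = coef
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 120))
        A /= np.linalg.norm(A, axis=0)                    # unit-norm atoms
        x_true = np.zeros(120); x_true[[7, 55, 90]] = [1.0, -0.8, 0.6]
        y = A @ x_true + 0.01 * rng.standard_normal(40)
        print(np.nonzero(omp(A, y, k=3))[0])              # indices of the recovered "targets"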

  12. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. Firstly, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. A method that approximates the inversion model resolution matrix (MRM) by a convolution with a point spread function (PSF) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation regularized blind deconvolution algorithm for geophysical inversion model enhancement is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We treat the PSF as a low-pass filter and enhance the inversion model based on the theory of the PSF convolution approximation. Both a 1D linear and a 2D magnetotelluric inversion example are used to analyze the validity of the theory and the algorithm. To verify the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; the relative convolution approximation error is only 0.15%. A 2D synthetic model-enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhanced result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, artifacts in the inversion model are suppressed. The overall model precision increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1, which illustrates that more of the actual model's information and detailed structure is recovered by the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into the results of the inversions and help make better informed decisions.
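
    Treating the inversion result as the true model blurred by the PSF, a frequency-domain deconvolution illustrates the enhancement idea. The sketch below is a much simpler non-blind Wiener filter assuming a known PSF, not the total-variation regularized blind deconvolution proposed in the paper:

        import numpy as np

        def wiener_deconvolve(blurred, psf, k=1e-2):
            """Frequency-domain Wiener deconvolution: Xhat = conj(H) Y / (|H|^2 + k)."""
            H = np.fft.fft(psf, n=blurred.size)
            Y = np.fft.fft(blurred)
            X = np.conj(H) * Y / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft(X))

        # toy 1-D "inversion model": a blocky true model smeared by a Gaussian PSF
        x = np.zeros(200); x[60:90] = 1.0; x[120:140] = -0.5
        psf = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
        psf /= psf.sum()
        blurred = np.convolve(x, psf, mode="same") + 0.01 * np.random.default_rng(0).standard_normal(200)
        enhanced = wiener_deconvolve(blurred, psf)
        # edge sharpness (max gradient) before and after enhancement
        print(np.abs(np.diff(blurred)).max(), np.abs(np.diff(enhanced)).max())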

  13. Inverse consistent non-rigid image registration based on robust point set matching

    PubMed Central

    2014-01-01

    Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. Improving image registration based on RPM is therefore an important issue. Methods In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is preserved well by our algorithm even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM, and they maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated. Again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, large or small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions Results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and reverse transformations between two images. PMID:25559889

  14. Contribution of tibiofemoral joint contact to net loads at the knee in gait.

    PubMed

    Walter, Jonathan P; Korkmaz, Nuray; Fregly, Benjamin J; Pandy, Marcus G

    2015-07-01

    Inverse dynamics analysis is commonly used to estimate the net loads at a joint during human motion. Most lower-limb models of movement represent the knee as a simple hinge joint when calculating muscle forces. This approach is limited because it neglects the contributions from tibiofemoral joint contact forces and may therefore lead to errors in estimated muscle forces. The aim of this study was to quantify the contributions of tibiofemoral joint contact loads to the net knee loads calculated from inverse dynamics for multiple subjects and multiple gait patterns. Tibiofemoral joint contact loads were measured in four subjects with instrumented implants as each subject walked at their preferred speed (normal gait) and performed prescribed gait modifications designed to treat medial knee osteoarthritis. Tibiofemoral contact loads contributed substantially to the net knee extension and knee adduction moments in normal gait with mean values of 16% and 54%, respectively. These findings suggest that knee-contact kinematics and loads should be included in lower-limb models of movement for more accurate determination of muscle forces. The results of this study may be used to guide the development of more realistic lower-limb models that account for the effects of tibiofemoral joint contact at the knee. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  15. Inverse dynamics of a 3 degree of freedom spatial flexible manipulator

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Serna, M.

    1989-01-01

    A technique is presented for solving the inverse dynamics and kinematics of a 3-degree-of-freedom spatial flexible manipulator. The proposed method finds the joint torques necessary to produce a specified end-effector motion. Since the inverse dynamics problem in elastic manipulators is closely coupled to the inverse kinematics problem, the solution of the first also yields the displacements and rotations at any point of the manipulator, including the joints. Furthermore, the formulation is complete in the sense that it includes all the nonlinear terms due to the large rotation of the links. The Timoshenko beam theory is used to model the elastic characteristics, and the resulting equations of motion are discretized using the finite element method. An iterative solution scheme is proposed that relies on local linearization of the problem. The solution of each linearization is carried out in the frequency domain. The performance and capabilities of this technique are tested through simulation analysis. Results show the potential use of this method for the smooth motion control of space telerobots.

  16. Kinematics and design of a class of parallel manipulators

    NASA Astrophysics Data System (ADS)

    Hertz, Roger Barry

    1998-12-01

    This dissertation is concerned with the kinematic analysis and design of a class of three degree-of-freedom, spatial parallel manipulators. The class of manipulators is characterized by two platforms, between which are three legs, each possessing a succession of revolute, spherical, and revolute joints. The class is termed the "revolute-spherical-revolute" class of parallel manipulators. Two members of this class are examined. The first mechanism is a double-octahedral variable-geometry truss, and the second is termed a double tripod. The history of the mechanisms is explored---the variable-geometry truss dates back to 1984, while predecessors of the double tripod mechanism date back to 1869. This work centers on the displacement analysis of these three-degree-of-freedom mechanisms. Two types of problem are solved: the forward displacement analysis (forward kinematics) and the inverse displacement analysis (inverse kinematics). The kinematic model of the class of mechanism is general in nature. A classification scheme for the revolute-spherical-revolute class of mechanism is introduced, which uses dominant geometric features to group designs into 8 different sub-classes. The forward kinematics problem is discussed: given a set of independently controllable input variables, solve for the relative position and orientation between the two platforms. For the variable-geometry truss, the controllable input variables are assumed to be the linear (prismatic) joints. For the double tripod, the controllable input variables are the three revolute joints adjacent to the base (proximal) platform. Multiple solutions are presented to the forward kinematics problem, indicating that there are many different positions (assemblies) that the manipulator can assume with equivalent inputs. For the double tripod these solutions can be expressed as a 16th degree polynomial in one unknown, while for the variable-geometry truss there exist two 16th degree polynomials, giving rise to 256 solutions. For special cases of the double tripod, the forward kinematics problem is shown to have a closed-form solution. Numerical examples are presented for the solution to the forward kinematics. A double tripod is presented that admits 16 unique and real forward kinematics solutions. Another example for a variable-geometry truss is given that possesses 64 real solutions: 8 for each 16th degree polynomial. The inverse kinematics problem is also discussed: given the relative position of the hand (end-effector), which is rigidly attached to one platform, solve for the independently controlled joint variables. Iterative solutions are proposed for both the variable-geometry truss and the double tripod. For special cases of both mechanisms, closed-form solutions are given. The practical problems of designing, building, and controlling a double-tripod manipulator are addressed. The resulting manipulator is a first-of-its-kind prototype of a tapered (asymmetric) double-tripod manipulator. Real-time forward and inverse kinematics algorithms on an industrial robot controller are presented. The resulting performance of the prototype is impressive: it achieved a maximum tool-tip speed of 4064 mm/s, a maximum acceleration of 5 g, and a cycle time of 1.2 seconds for a typical pick-and-place pattern.

  17. Tomographic inversion of satellite photometry

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1984-01-01

    An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.

  18. A comparative study of controlled random search algorithms with application to inverse aerofoil design

    NASA Astrophysics Data System (ADS)

    Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.

    2018-06-01

    This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
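
    In Price's basic CRSA, a population of N random points is maintained; at each step n+1 points are drawn at random, a trial point is formed by reflecting one of them through the centroid of the other n, and the trial replaces the worst population member if it is feasible and better. A minimal sketch under these assumptions (without the improvements of Ali et al. or those proposed in the article) is:

        import numpy as np

        def crs(cost, bounds, n_pop=50, n_iter=5000, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            dim = lo.size
            pop = rng.uniform(lo, hi, (n_pop, dim))
            f = np.array([cost(p) for p in pop])
            for _ in range(n_iter):
                idx = rng.choice(n_pop, dim + 1, replace=False)   # n+1 random points
                centroid = pop[idx[:-1]].mean(axis=0)
                trial = 2.0 * centroid - pop[idx[-1]]             # reflection through the centroid
                if np.any(trial < lo) or np.any(trial > hi):
                    continue                                      # discard infeasible trials
                ft = cost(trial)
                worst = f.argmax()
                if ft < f[worst]:                                 # replace the worst point
                    pop[worst], f[worst] = trial, ft
                if f.max() - f.min() < 1e-10:                     # population has collapsed
                    break
            best = f.argmin()
            return pop[best], f[best]

        # toy 2-D benchmark problem (Rosenbrock)
        rosen = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
        x_best, f_best = crs(rosen, (np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
        print(x_best, f_best)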

  19. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to reduce the computational time and avoid the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver with a suitable metamodel fitted to a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, we propagate these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact of incomplete knowledge of some of the parameters describing the inspection scenarios, a situation commonly encountered in the industrial NDE context.
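
    The metamodel substitution can be illustrated with a simple Gaussian radial-basis-function surrogate fitted offline to a small database of (parameter, signal) pairs and then queried inside a brute-force inversion loop; the forward model, database size and kernel width below are illustrative assumptions, not the simulators or metamodels used in the study.

        import numpy as np

        # hypothetical "expensive" forward model: flaw length -> simulated probe signal
        def forward(length):
            x = np.linspace(-1, 1, 50)
            return length * np.exp(-x ** 2 / (0.1 + 0.2 * length))

        # offline database of (parameter, signal) pairs
        train_p = np.linspace(0.5, 5.0, 25)
        train_sig = np.array([forward(p) for p in train_p])

        # Gaussian RBF surrogate fitted on the database (one weight vector per signal sample)
        def rbf(a, b, width=0.5):
            return np.exp(-((a[:, None] - b[None, :]) / width) ** 2)

        weights = np.linalg.solve(rbf(train_p, train_p) + 1e-8 * np.eye(train_p.size), train_sig)

        def surrogate(p):
            return rbf(np.atleast_1d(p), train_p) @ weights   # cheap replacement for forward()

        # inversion: grid search over the surrogate instead of the expensive solver
        measured = forward(2.7) + 0.01 * np.random.default_rng(0).standard_normal(50)
        grid = np.linspace(0.5, 5.0, 1000)
        mis = np.array([np.sum((surrogate(p)[0] - measured) ** 2) for p in grid])
        print("estimated flaw length:", grid[mis.argmin()])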

  20. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE PAGES

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-27

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  1. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  2. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

    For fast and effective tracking of multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association, which combines fuzzy c-means clustering and the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability that a measurement originates from a given target. The membership value is obtained by optimizing the fuzzy c-means clustering objective function with the maximum entropy principle. To account for the effect of shared measurements, we use a correction factor to adjust the association probability matrix when estimating the state of each target. Because this algorithm avoids splitting the confirmation matrix, it overcomes the high computational load of the joint PDA algorithm. The results of simulations and analysis conducted for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can perform MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains stable as the process noise variance increases. The proposed algorithm has the advantages of efficiency and low computational load, which ensure good performance when tracking multiple targets in a dense cluttered environment.
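
    The clustering core alternates between the membership update u_ij = d_ij^(-2/(m-1)) / sum_k d_ik^(-2/(m-1)) and recomputation of cluster centers as membership-weighted means. The sketch below is standard fuzzy c-means only; the maximum-entropy weighting, the correction factor and the JPDA association step of the proposed tracker are not reproduced:

        import numpy as np

        def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], n_clusters))
            U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per point
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted means
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2.0 / (m - 1.0)))
                U /= U.sum(axis=1, keepdims=True)             # normalized inverse-distance memberships
            return centers, U

        # toy example: measurements scattered around two "targets"
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal([0, 0], 0.3, (50, 2)), rng.normal([3, 3], 0.3, (50, 2))])
        centers, U = fuzzy_c_means(X, n_clusters=2)
        print(centers)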

  3. Earthquake Rupture Process Inferred from Joint Inversion of 1-Hz GPS and Strong Motion Data: The 2008 Iwate-Miyagi Nairiku, Japan, Earthquake

    NASA Astrophysics Data System (ADS)

    Yokota, Y.; Koketsu, K.; Hikima, K.; Miyazaki, S.

    2009-12-01

    1-Hz GPS data can be used as ground displacement seismograms. The capability of high-rate GPS to record seismic wave fields for large magnitude (M8 class) earthquakes has been demonstrated [Larson et al., 2003]. Rupture models have been inferred solely or supplementarily from 1-Hz GPS data [Miyazaki et al., 2004; Ji et al., 2004; Kobayashi et al., 2006]. However, none of the previous studies have succeeded in inferring the source process of a medium-sized (M6 class) earthquake solely from 1-Hz GPS data. We first compared 1-Hz GPS data with integrated strong motion waveforms for the 2008 Iwate-Miyagi Nairiku, Japan, earthquake. We performed a waveform inversion for the rupture process using 1-Hz GPS data only [Yokota et al., 2009]. We here discuss the rupture processes inferred from the inversion of 1-Hz GPS data of GEONET only, the inversion of strong motion data of K-NET and KiK-net only, and the joint inversion of 1-Hz GPS and strong motion data. The data were inverted to infer the rupture process of the earthquake using the inversion codes by Yoshida et al. [1996] with the revisions by Hikima and Koketsu [2005]. In the 1-Hz GPS inversion result, the total seismic moment is 2.7×10^19 Nm (Mw: 6.9) and the maximum slip is 5.1 m. These results are approximately equal to the 2.4×10^19 Nm and 4.5 m obtained from the inversion of strong motion data. The difference in the slip distribution on the northern fault segment may come from long-period motions possibly recorded only in the 1-Hz GPS data. In the joint inversion result, the total seismic moment is 2.5×10^19 Nm and the maximum slip is 5.4 m. These values also agree well with the result of the 1-Hz GPS inversion. In all the series of snapshots that show the dynamic features of the rupture process, the rupture propagated bilaterally from the hypocenter to the south and north. The northern rupture speed is faster than the southern one. These agreements demonstrate the ability of 1-Hz GPS data to infer not only static, but also dynamic features of a medium-sized (M6 class) earthquake, although some details, such as the shallow extension of the southern asperity, are missing, due possibly to limitations such as the sampling interval of 1 s and the sparse GPS station distribution in the near field of the earthquake. The result of the joint inversion indicates that these minor discrepancies can be reduced by the introduction of strong motion data. Continuous GPS monitoring at a much higher rate (e.g., 10 Hz) will also be helpful for reducing the minor discrepancies.

  4. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

    The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and the divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem for the regularized misfit function. In order to avoid the huge memory requirement and the very long time needed to compute the Jacobian sensitivity matrix in the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, which can be transformed into two pseudo-forward modeling computations. This avoids explicit calculation and storage of the full Jacobian matrix, which leads to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm will be illustrated by several typical 3-D models with horizontal and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.
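
    The key point is that CG never needs the Jacobian explicitly: the normal-equations operator (J^T J + lambda I) is applied through the two callback products J*v and J^T*w, each obtainable by pseudo-forward modeling. A minimal matrix-free CG sketch follows, with the products supplied here by an explicit random matrix purely for illustration:

        import numpy as np

        def cg_normal_equations(Jv, JTv, d, n, lam=1e-2, n_iter=200, tol=1e-10):
            """Solve (J^T J + lam I) x = J^T d using only the products Jv and JTv."""
            apply_A = lambda v: JTv(Jv(v)) + lam * v
            x = np.zeros(n)
            r = JTv(d) - apply_A(x)
            p = r.copy()
            rs = r @ r
            for _ in range(n_iter):
                Ap = apply_A(p)
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if rs_new < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # illustration only: the sensitivity products come from an explicit matrix here,
        # whereas in the MT inversion they would be computed by pseudo-forward modeling
        rng = np.random.default_rng(0)
        J = rng.standard_normal((80, 40))
        m_true = rng.standard_normal(40)
        d = J @ m_true + 0.01 * rng.standard_normal(80)
        m_est = cg_normal_equations(lambda v: J @ v, lambda w: J.T @ w, d, n=40)
        print(np.linalg.norm(m_est - m_true))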

  5. A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Obergfell, Klaus

    1991-01-01

    The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson, et al 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
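
    The outer, vision-based loop can be summarized as: measure the tip error with the camera, map it through the (pseudo-)inverse of the manipulator Jacobian to a joint correction, and pass the corrected joint angles to the inner joint controller as a new reference. A minimal planar two-link sketch of this static positioning iteration is below; the link lengths, gain and rigid-arm kinematics are illustrative assumptions, not the flexible two-link arm of the thesis:

        import numpy as np

        L1, L2 = 1.0, 0.8                                   # hypothetical link lengths

        def tip(q):
            """Forward kinematics of a planar two-link arm (stands in for the vision measurement)."""
            return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                             L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

        def jacobian(q):
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        def static_positioning(target, q0, gain=0.5, tol=1e-4, max_iter=100):
            q = q0.copy()
            for _ in range(max_iter):
                err = target - tip(q)                       # tip error from the vision system
                if np.linalg.norm(err) < tol:
                    break
                dq = np.linalg.pinv(jacobian(q)) @ err      # map tip error to a joint correction
                q = q + gain * dq                           # new reference for the joint controller
            return q

        q_final = static_positioning(target=np.array([1.2, 0.9]), q0=np.array([0.3, 0.3]))
        print(q_final, tip(q_final))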

  6. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    USGS Publications Warehouse

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of the source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to effectively find the optimal smoothness criterion when applying Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.

  7. Predicting tibiotalar and subtalar joint angles from skin-marker data with dual-fluoroscopy as a reference standard.

    PubMed

    Nichols, Jennifer A; Roach, Koren E; Fiorentino, Niccolo M; Anderson, Andrew E

    2016-09-01

    Evidence suggests that the tibiotalar and subtalar joints provide near six degree-of-freedom (DOF) motion. Yet, kinematic models frequently assume one DOF at each of these joints. In this study, we quantified the accuracy of kinematic models to predict joint angles at the tibiotalar and subtalar joints from skin-marker data. Models included 1 or 3 DOF at each joint. Ten asymptomatic subjects, screened for deformities, performed 1.0m/s treadmill walking and a balanced, single-leg heel-rise. Tibiotalar and subtalar joint angles calculated by inverse kinematics for the 1 and 3 DOF models were compared to those measured directly in vivo using dual-fluoroscopy. Results demonstrated that, for each activity, the average error in tibiotalar joint angles predicted by the 1 DOF model were significantly smaller than those predicted by the 3 DOF model for inversion/eversion and internal/external rotation. In contrast, neither model consistently demonstrated smaller errors when predicting subtalar joint angles. Additionally, neither model could accurately predict discrete angles for the tibiotalar and subtalar joints on a per-subject basis. Differences between model predictions and dual-fluoroscopy measurements were highly variable across subjects, with joint angle errors in at least one rotation direction surpassing 10° for 9 out of 10 subjects. Our results suggest that both the 1 and 3 DOF models can predict trends in tibiotalar joint angles on a limited basis. However, as currently implemented, neither model can predict discrete tibiotalar or subtalar joint angles for individual subjects. Inclusion of subject-specific attributes may improve the accuracy of these models. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversion in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to reduce the risk of falling into a local optimum, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and computational performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrate the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential for improving our ability to solve geophysical inversion problems.
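
    In SFLA, the frogs are sorted by fitness and dealt into memeplexes; within each memeplex the worst frog repeatedly leaps toward the memeplex best (failing that, toward the global best, and otherwise is re-randomized), after which all memeplexes are shuffled together. A compact sketch of this scheme, applied here to a generic multimodal test function rather than a dispersion-curve misfit, is:

        import numpy as np

        def sfla(cost, lo, hi, n_memeplex=5, frogs_per=10, n_shuffles=50, n_local=10, seed=0):
            rng = np.random.default_rng(seed)
            n_frogs, dim = n_memeplex * frogs_per, lo.size
            frogs = rng.uniform(lo, hi, (n_frogs, dim))
            fit = np.array([cost(f) for f in frogs])
            for _ in range(n_shuffles):
                order = np.argsort(fit)                     # sort, then deal into memeplexes
                frogs, fit = frogs[order], fit[order]
                g_best = frogs[0].copy()                    # global best frog
                for m in range(n_memeplex):
                    idx = np.arange(m, n_frogs, n_memeplex) # frogs belonging to memeplex m
                    for _ in range(n_local):
                        worst = idx[fit[idx].argmax()]
                        best = idx[fit[idx].argmin()]
                        for leader in (frogs[best], g_best):
                            trial = frogs[worst] + rng.random() * (leader - frogs[worst])
                            trial = np.clip(trial, lo, hi)
                            if cost(trial) < fit[worst]:    # accept the leap if it improves
                                frogs[worst], fit[worst] = trial, cost(trial)
                                break
                        else:                               # no improvement: random replacement
                            frogs[worst] = rng.uniform(lo, hi, dim)
                            fit[worst] = cost(frogs[worst])
            best = fit.argmin()
            return frogs[best], fit[best]

        # toy multimodal misfit (Rastrigin) standing in for a dispersion-curve misfit
        cost = lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)
        x_best, f_best = sfla(cost, lo=np.full(3, -5.0), hi=np.full(3, 5.0))
        print(x_best, f_best)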

  9. 3D joint inversion of gravity-gradient and borehole gravity data

    NASA Astrophysics Data System (ADS)

    Geng, Meixia; Yang, Qingjie; Huang, Danian

    2017-12-01

    Borehole gravity is increasingly used in mineral exploration due to the advent of slim-hole gravimeters. Given the full-tensor gradiometry data available nowadays, joint inversion of surface and borehole data is a logical next step. Here, we base our inversions on cokriging, a geostatistical estimation method in which the error variance is minimised by exploiting the cross-correlation between several variables. In this study, the density estimates are derived using gravity-gradient data, borehole gravity and known densities along the borehole as secondary variables, with density as the primary variable. Cokriging is non-iterative and therefore computationally efficient. In addition, cokriging inversion provides estimates of the error variance for each model, which allows direct assessment of the inverse model. Examples are shown involving data from a single borehole, from multiple boreholes, and from combinations of borehole gravity and gravity-gradient data. The results clearly show that the depth resolution of gravity-gradient inversion can be improved significantly by including borehole data in addition to gravity-gradient data. However, the resolution of borehole data falls off rapidly as the distance between the borehole and the feature of interest increases. In the case where the borehole is far from the target of interest, the inverted result can be improved by incorporating gravity-gradient data, especially when all five independent components are used in the inversion.

  10. Focal mechanism determination for induced seismicity using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Yuyang; Zhang, Haijiang; Li, Junlun; Yin, Chen; Wu, Furong

    2018-06-01

    Induced seismicity is widely detected during hydraulic fracture stimulation. To better understand the fracturing process, a thorough knowledge of the source mechanism is required. In this study, we develop a new method to determine the focal mechanism for induced seismicity. Three misfit functions are used in our method to measure the differences between observed and modeled data from different aspects, including the waveform, P wave polarity and S/P amplitude ratio. We minimize these misfit functions simultaneously using the neighbourhood algorithm. Through synthetic data tests, we show the ability of our method to yield reliable focal mechanism solutions and study the effect of velocity inaccuracy and location error on the solutions. To mitigate the impact of the uncertainties, we develop a joint inversion method to find the optimal source depth and focal mechanism simultaneously. Using the proposed method, we determine the focal mechanisms of 40 stimulation induced seismic events in an oil/gas field in Oman. By investigating the results, we find that the reactivation of pre-existing faults is the main cause of the induced seismicity in the monitored area. Other observations obtained from the focal mechanism solutions are also consistent with earlier studies in the same area.

  11. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.

  12. Segmentation of hand radiographs using fast marching methods

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Novak, Carol L.

    2006-03-01

    Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
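
    A simplified analogue of the intensity-driven front propagation is sketched below using the scikit-fmm package (an assumption; the paper does not name its implementation). A synthetic bright strip stands in for a finger, and the marching speed is a function of image intensity so the front advances preferentially along bone-like structures.

```python
# Simplified analogue of intensity-dependent front propagation (assumes the
# scikit-fmm package, `pip install scikit-fmm`). A synthetic bright strip stands
# in for a finger bone; fronts travel faster through bright pixels.
import numpy as np
import skfmm

img = np.zeros((128, 128))
img[:, 60:68] = 1.0                                     # bright vertical "finger"
img += np.random.default_rng(0).normal(0, 0.05, img.shape)

phi = np.ones_like(img)
phi[5, 64] = -1.0                                       # seed point (zero contour of phi)

speed = 0.1 + img.clip(0, 1)                            # speed as a function of intensity
t = skfmm.travel_time(phi, speed, dx=1.0)               # arrival times of the marching front

# Along each row the earliest arrival sits inside the bright strip, giving a
# crude skeleton estimate of the "finger".
skeleton_cols = np.argmin(np.asarray(t), axis=1)
print("median skeleton column:", int(np.median(skeleton_cols)))
```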

  13. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

    This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.

  14. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the MCMC-based Bayesian approach, marginal pdfs, means, variances or covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
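
    The MAP computation described above reduces, for a Gaussian prior truncated at zero or at bounds, to a regularised least-squares problem under non-negativity or box constraints. The sketch below sets up such a problem with synthetic Green's functions and solves it with scipy's NNLS and bounded least-squares routines; it is a toy stand-in, not the author's fault-slip code.

```python
# Hedged sketch of the MAP step: with a Gaussian prior truncated at zero (or at
# bounds), the MAP slip model solves a regularised least-squares problem under
# non-negativity (NNLS) or box constraints (bounded least squares). G, d and
# the covariances below are synthetic placeholders, not real geodetic data.
import numpy as np
from scipy.optimize import nnls, lsq_linear

rng = np.random.default_rng(0)
n_obs, n_patch = 40, 12
G = rng.normal(size=(n_obs, n_patch))            # elastic Green's functions (toy)
m_true = np.clip(rng.normal(0.5, 0.5, n_patch), 0, None)
d = G @ m_true + rng.normal(0, 0.05, n_obs)      # noisy surface displacements

sigma_d, sigma_m, m0 = 0.05, 1.0, np.zeros(n_patch)

# Stack data and prior terms so that the objective
#   ||(G m - d)/sigma_d||^2 + ||(m - m0)/sigma_m||^2
# becomes a single linear least-squares system A m ~= b.
A = np.vstack([G / sigma_d, np.eye(n_patch) / sigma_m])
b = np.concatenate([d / sigma_d, m0 / sigma_m])

m_map_nn, _ = nnls(A, b)                         # single truncation: slip >= 0
res = lsq_linear(A, b, bounds=(0.0, 2.0))        # double truncation: 0 <= slip <= 2
m_map_bounded = res.x

print("max |NNLS - truth|   :", np.abs(m_map_nn - m_true).max())
print("max |bounded - truth|:", np.abs(m_map_bounded - m_true).max())
```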

  15. Stochastic inversion of ocean color data using the cross-entropy method.

    PubMed

    Salama, Mhd Suhyb; Shen, Fang

    2010-01-18

    Improving the inversion of ocean color data is an ever-continuing effort to increase the accuracy of derived inherent optical properties. In this paper we present a stochastic inversion algorithm to derive inherent optical properties from ocean color, ship- and space-borne data. The inversion algorithm is based on the cross-entropy method, where sets of inherent optical properties are generated and converged to the optimal set through an iterative process. The algorithm is validated against four data sets: simulated, noisy simulated, in-situ measured, and satellite match-up data sets. Statistical analysis of validation results is based on model-II regression using five goodness-of-fit indicators; only R2 and root mean square error (RMSE) are mentioned hereafter. Accurate values of the total absorption coefficient are derived with R2 > 0.91 and RMSE of log-transformed data less than 0.55. Reliable values of the total backscattering coefficient are also obtained with R2 > 0.7 (after removing outliers) and RMSE < 0.37. The developed algorithm has the ability to derive reliable results from noisy data with R2 above 0.96 for the total absorption and above 0.84 for the backscattering coefficients. The algorithm is self-contained and easy to implement and modify to derive the variability of chlorophyll-a absorption that may correspond to different phytoplankton species. It gives consistently accurate results and is therefore worth considering for ocean color global products.
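
    A minimal cross-entropy-method loop of the kind described above is sketched below: candidate parameter sets are drawn from a Gaussian, scored against the data, and the sampling distribution is refit to the elite fraction until it concentrates. The two-parameter forward model is a toy stand-in, not the paper's bio-optical model.

```python
# Illustrative cross-entropy method (CEM) loop for a small inversion problem.
# The forward model is a toy stand-in driven by two IOP-like parameters; it is
# not the bio-optical model used in the paper.
import numpy as np

rng = np.random.default_rng(3)

def forward(params, wavelengths):
    """Toy 'reflectance' model driven by (a, bb)-like parameters."""
    a, bb = params
    return bb / (a + bb) * np.exp(-wavelengths / 600.0)

wl = np.linspace(400, 700, 31)
truth = np.array([0.3, 0.05])
data = forward(truth, wl) + rng.normal(0, 0.001, wl.size)

mu, sigma = np.array([0.5, 0.1]), np.array([0.3, 0.08])    # initial sampling pdf
n_samples, n_elite = 200, 20

for _ in range(30):
    cand = np.abs(rng.normal(mu, sigma, size=(n_samples, 2)))  # keep IOPs positive
    cost = [np.mean((forward(c, wl) - data) ** 2) for c in cand]
    elite = cand[np.argsort(cost)[:n_elite]]                   # best-fitting fraction
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6   # refit sampling pdf

print("retrieved (a, bb):", mu, " true:", truth)
```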

  16. Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications

    DTIC Science & Technology

    2005-04-01

    coefficient sets describing inverse transforms and matched forward/ inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.

  17. Delineation of a quick clay zone at Smørgrav, Norway, with electromagnetic methods under geotechnical constraints

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Bastani, Mehrdad; Donohue, Shane; Persson, Lena; Aspmo Pfaffhuber, Andreas; Reiser, Fabienne; Ren, Zhengyong

    2013-05-01

    In many coastal areas of North America and Scandinavia, post-glacial clay sediments have emerged above sea level due to isostatic uplift. These clays are often destabilised by fresh water leaching and transformed to so-called quick clays, as at the investigated area at Smørgrav, Norway. Slight mechanical disturbances of these materials may trigger landslides. Since the leaching increases the electrical resistivity of quick clay as compared to normal marine clay, the application of electromagnetic (EM) methods is of particular interest in the study of quick clay structures. For the first time, single and joint inversions of direct-current resistivity (DCR), radiomagnetotelluric (RMT) and controlled-source audiomagnetotelluric (CSAMT) data were applied to delineate a zone of quick clay. The resulting 2-D models of electrical resistivity correlate excellently with previously published data from a ground conductivity meter and resistivity logs from two resistivity cone penetration tests (RCPT) into marine clay and quick clay. The RCPT log into the central part of the quick clay identifies the electrical resistivity of the quick clay structure to lie between 10 and 80 Ω m. In combination with the 2-D inversion models, it becomes possible to delineate the vertical and horizontal extent of the quick clay zone. As compared to the inversions of single data sets, the joint inversion model exhibits sharper resistivity contrasts and its resistivity values are more characteristic of the expected geology. In our preferred joint inversion model, there is a clear demarcation between dry soil, marine clay, quick clay and bedrock, which consists of alum shale and limestone.

  18. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time and phase. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion prone to falling into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore elastic effects in the real seismic wavefield and make the inversion harder. As a result, the accuracy of the final inversion result relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, the absence of very low frequencies (< 3 Hz) in field data remains a bottleneck for FWI. By extracting ultra-low-frequency data from field data, envelope inversion can recover a low-wavenumber model through a demodulation operator (the envelope operator), even though these low frequencies do not actually exist in the field data. To improve the efficiency and viability of the inversion, we propose in this study a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion and derived the misfit function and the corresponding gradient operator. Then we performed hybrid-domain FWI using the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to computation nodes by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrate that the combined envelope inversion + hybrid-domain FWI obtains a more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation substantially improves performance.
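
    The demodulation (envelope) operator at the heart of envelope inversion can be illustrated with the analytic signal: the envelope of a band-limited trace carries energy well below the lowest frequency present in the trace itself. The sketch below shows this on a synthetic trace; it is not the authors' 3D elastic implementation.

```python
# Sketch of the envelope (demodulation) step: the analytic signal of a band-
# limited trace yields an envelope whose spectrum contains energy well below the
# lowest frequency present in the trace itself. The trace below is synthetic.
import numpy as np
from scipy.signal import hilbert

dt = 0.002
t = np.arange(0, 4, dt)
# Band-limited "seismic" trace: a 10 Hz carrier modulated by a slow Gaussian
trace = np.exp(-((t - 2.0) / 0.6) ** 2) * np.cos(2 * np.pi * 10 * t)

envelope = np.abs(hilbert(trace))          # demodulation (envelope) operator

freqs = np.fft.rfftfreq(t.size, dt)
spec_trace = np.abs(np.fft.rfft(trace))
spec_env = np.abs(np.fft.rfft(envelope))

low = freqs < 3.0                          # the band typically missing from field data
print("energy below 3 Hz, trace   :", spec_trace[low].sum())
print("energy below 3 Hz, envelope:", spec_env[low].sum())
```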

  19. Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks.

    PubMed

    Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid

    2017-10-09

    The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms.

  20. Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks

    PubMed Central

    Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid

    2017-01-01

    The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms. PMID:28991200

  1. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    ERIC Educational Resources Information Center

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  2. Basic Operational Robotics Instructional System

    NASA Technical Reports Server (NTRS)

    Todd, Brian Keith; Fischer, James; Falgout, Jane; Schweers, John

    2013-01-01

    The Basic Operational Robotics Instructional System (BORIS) is a six-degree-of-freedom rotational robotic manipulator system simulation used for training of fundamental robotics concepts, with in-line shoulder, offset elbow, and offset wrist. BORIS is used to provide generic robotics training to aerospace professionals including flight crews, flight controllers, and robotics instructors. It uses forward kinematic and inverse kinematic algorithms to simulate joint and end-effector motion, combined with a multibody dynamics model, moving-object contact model, and X-Windows based graphical user interfaces, coordinated in the Trick Simulation modeling environment. The motivation for development of BORIS was the need for a generic system for basic robotics training. Before BORIS, introductory robotics training was done with either the SRMS (Shuttle Remote Manipulator System) or SSRMS (Space Station Remote Manipulator System) simulations. The unique construction of each of these systems required some specialized training that distracted students from the ideas and goals of the basic robotics instruction.

  3. JacksonBot - Design, Simulation and Optimal Control of an Action Painting Robot

    NASA Astrophysics Data System (ADS)

    Raschke, Michael; Mombaur, Katja; Schubert, Alexander

    We present the robotics platform JacksonBot, which is capable of producing paintings inspired by the Action Painting style of Jackson Pollock. A dynamically moving robot arm splashes color from a container at the end effector onto the canvas. The paintings produced by this platform rely on a combination of algorithmically generated robot arm motions and the random effects of the splashing color. The robot can be considered a complex and powerful tool for generating artworks programmed by a user. Desired end effector motions can be prescribed either by mathematical functions, by point sequences or by data glove motions. We have evaluated the effect of different shapes of input motions on the resulting painting. In order to compute the robot joint trajectories necessary to move along a desired end effector path, we use an optimal control based approach to solve the inverse kinematics problem.

  4. Interpretation of magnetotelluric resistivity and phase soundings over horizontal layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patella, D.

    1976-02-01

    The present paper deals with a new inverse method for quantitatively interpreting magnetotelluric apparent resistivity and phase-lag sounding curves over horizontally stratified earth sections. The recurrent character of the general formula relating the wave impedance of an (n-1)-layered medium to that of an n-layered medium suggests the use of the method of reduction to a lower boundary plane, as originally termed by Koefoed in the case of dc resistivity soundings. The layering parameters are so directly derived by a simple iterative procedure. The method is applicable for any number of layers but only when both apparent resistivity and phase-lag sounding curves are jointly available. Moreover no sophisticated algorithm is required: a simple desk electronic calculator together with a sheet of two-layer apparent resistivity and phase-lag master curves are sufficient to reproduce earth sections which, in the range of equivalence, are all consistent with field data.

  5. Assessment of two-dimensional induced accelerations from measured kinematic and kinetic data.

    PubMed

    Hof, A L; Otten, E

    2005-11-01

    A simple algorithm is presented to calculate the induced accelerations of body segments in human walking for the sagittal plane. The method essentially consists of setting up 2x4 force equations, 4 moment equations, 2x3 joint constraint equations and two constraints related to the foot-ground interaction. Data needed for the equations are, next to masses and moments of inertia, the positions of ankle, knee and hip. This set of equations is put in the form of an 18x18 matrix or 20x20 matrix, the solution of which can be found by inversion. By applying input vectors related to gravity, to centripetal accelerations or to muscle moments, the 'induced' accelerations and reaction forces related to these inputs can be found separately. The method was tested for walking in one subject. Good agreement was found with published results obtained by much more complicated three-dimensional forward dynamic models.

  6. An automated method for depth-dependent crustal anisotropy detection with receiver function

    NASA Astrophysics Data System (ADS)

    Licciardi, Andrea; Piana Agostinetti, Nicola

    2015-04-01

    Crustal seismic anisotropy can be generated by a variety of geological factors (e.g. alignment of minerals/cracks, presence of fluids, etc.). In the case of the transversely isotropic media approximation, information about strength and orientation of the anisotropic symmetry axis (including dip) can be extracted from the analysis of P-to-S conversions by means of teleseismic receiver functions (RF). Classically this has been achieved through probabilistic inversion encoding a forward solver for anisotropic media. This approach strongly relies on a priori choices regarding Earth's crust parameterization and velocity structure, requires an extensive knowledge of the RF method and involves time-consuming trial and error steps. We present an automated method for reducing the non-uniqueness in this kind of inversion and for retrieving depth-dependent seismic anisotropy parameters in the crust with a resolution of some hundreds of meters. The method involves a multi-frequency approach (for better absolute Vs determination) and the decomposition of the RF data-set in its azimuthal harmonics (to separate the effects of the isotropic and anisotropic components). A first inversion of the isotropic component (zero-order harmonics) by means of a Reversible jump Markov Chain Monte Carlo (RjMCMC) provides the posterior probability distribution for the position of the velocity jumps at depth, from which information on the number of layers and the S-wave velocity structure below a broadband seismic station can be extracted. This information, together with that encoded in the first-order harmonic, is jointly used in an automated way to: (1) determine the number of anisotropic layers and their approximate position at depth, and (2) narrow the search boundaries for layer thickness and S-wave velocity. Finally, an inversion is carried out with a Neighbourhood Algorithm (NA), where the free parameters are represented by the anisotropic structure beneath the seismic station. We tested the method against synthetic RF with correlated Gaussian noise to investigate the resolution power for multiple and thin (1-5 km) anisotropic layers in the crust. The algorithm correctly retrieves the true models for the number and the position of the anisotropic layers, and for the strength and orientation of the anisotropic symmetry axis, although the trend direction is better constrained than the dip angle. The method is then applied to a real data-set and the results compared with previous RF studies.

  7. Joint Inversion of Crustal and Uppermost Mantle Structure in Western China

    DTIC Science & Technology

    2013-11-02

    western China from this project. Approved for public release; distribution is unlimited. 21 References Ammon, C. J., G. E. Randall, and G. Zandt, On the non-uniqueness of receiver function inversions, J. Geophys. Res., 95(B10), pp. 15303-15318, 1990. Benson, G., et al., Processing seismic

  8. Modeling and optimization of joint quality for laser transmission joint of thermoplastic using an artificial neural network and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Zhang, Cheng; Li, Pin; Wang, Kai; Hu, Yang; Zhang, Peng; Liu, Huixia

    2012-11-01

    A central composite rotatable experimental design (CCRD) is conducted to design experiments for laser transmission joining of the thermoplastic polycarbonate (PC). An artificial neural network was used to establish the relationships between the laser transmission joining process parameters (laser power, velocity, clamp pressure and scanning number) and the joint strength and joint seam width. The developed mathematical models are tested by the analysis of variance (ANOVA) method to check their adequacy, and the effects of the process parameters on the responses, as well as the interaction effects of key process parameters on joint quality, are analyzed and discussed. Finally, the desirability function coupled with a genetic algorithm is used to carry out the optimization of the joint strength and joint width. The results show that the predicted results of the optimization are in good agreement with the experimental results, so this study provides an effective method to enhance the joint quality.

  9. OCT structure, COB location and magmatic type of the S Angolan & SE Brazilian margins from integrated quantitative analysis of deep seismic reflection and gravity anomaly data

    NASA Astrophysics Data System (ADS)

    Cowie, Leanne; Kusznir, Nick; Horn, Brian

    2014-05-01

    Integrated quantitative analysis using deep seismic reflection data and gravity inversion has been applied to the S Angolan and SE Brazilian margins to determine OCT structure, COB location and magmatic type. Knowledge of these margin parameters is of critical importance for understanding rifted continental margin formation processes and for evaluating petroleum systems in deep-water frontier oil and gas exploration. The OCT structure, COB location and magmatic type of the S Angolan and SE Brazilian rifted continental margins are much debated; exhumed and serpentinised mantle have been reported at these margins. Gravity anomaly inversion, incorporating a lithosphere thermal gravity anomaly correction, has been used to determine Moho depth, crustal basement thickness and continental lithosphere thinning. Residual Depth Anomaly (RDA) analysis has been used to investigate OCT bathymetric anomalies with respect to expected oceanic bathymetries and subsidence analysis has been used to determine the distribution of continental lithosphere thinning. These techniques have been validated for profiles Lusigal 12 and ISE-01 on the Iberian margin. In addition, a joint inversion technique using deep seismic reflection and gravity anomaly data has been applied to the ION-GXT BS1-575 SE Brazil and ION-GXT CS1-2400 S Angola deep seismic reflection lines. The joint inversion method solves for coincident seismic and gravity Moho in the time domain and calculates the lateral variations in crustal basement densities and velocities along the seismic profiles. Gravity inversion, RDA and subsidence analysis along the ION-GXT BS1-575 profile, which crosses the Sao Paulo Plateau and Florianopolis Ridge of the SE Brazilian margin, predict the COB to be located SE of the Florianopolis Ridge. Integrated quantitative analysis shows no evidence for exhumed mantle on this margin profile. The joint inversion technique predicts oceanic crustal thicknesses of between 7 and 8 km with normal oceanic basement seismic velocities and densities. Beneath the Sao Paulo Plateau and Florianopolis Ridge, joint inversion predicts crustal basement thicknesses of 10-15 km with high values of basement density and seismic velocity under the Sao Paulo Plateau, which are interpreted as indicating a significant magmatic component within the crustal basement. The Sao Paulo Plateau and Florianopolis Ridge are separated by a thin region of crustal basement beneath the salt interpreted as a regional transtensional structure. Sediment-corrected RDAs and gravity-derived "synthetic" RDAs are of a similar magnitude on oceanic crust, implying negligible mantle dynamic topography. Gravity inversion, RDA and subsidence analysis along the S Angolan ION-GXT CS1-2400 profile suggest that exhumed mantle, corresponding to a magma-poor margin, is absent. The thickness of the earliest oceanic crust, derived from gravity and deep seismic reflection data, is approximately 7 km, consistent with the global average oceanic crustal thickness. The joint inversion predicts a small difference between oceanic and continental crustal basement density and seismic velocity, with the change in basement density and velocity corresponding to the COB independently determined from RDA and subsidence analysis. The difference between the sediment-corrected RDA and that predicted from gravity inversion crustal thickness variation implies that this margin is experiencing approximately 500 m of anomalous uplift attributed to mantle dynamic uplift.

  10. Beam-column joint shear prediction using hybridized deep learning neural network with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mundher Yaseen, Zaher; Abdulmohsin Afan, Haitham; Tran, Minh-Tung

    2018-04-01

    It is scientifically evidenced that beam-column joints are a critical point in reinforced concrete (RC) structures under fluctuating load effects. In this study, a novel hybrid data-intelligence model is developed to predict the joint shear behavior of exterior beam-column structural frames. The hybrid data-intelligence model is called the genetic algorithm integrated with deep learning neural network model (GA-DLNN). The genetic algorithm is used as a prior modelling phase for the input approximation, whereas the DLNN predictive model is used for the prediction phase. To demonstrate this structural problem, experimental data are collected from the literature that define the dimensional and specimen properties. The attained findings evidence the effectiveness of the hybrid GA-DLNN in modelling the beam-column joint shear problem. In addition, accurate prediction is achieved with fewer input variables owing to the feasibility of the evolutionary phase.
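
    A hedged sketch of the two-phase idea follows: a simple genetic-style search selects a subset of input variables, and a small neural-network regressor (scikit-learn's MLPRegressor standing in for the deep network) scores each subset by cross-validation. The tabular data are synthetic, not the experimental beam-column database.

```python
# Genetic-style input selection + neural-network scoring (illustrative sketch).
# The data are synthetic; MLPRegressor is a stand-in for the deep network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 8))                       # 8 candidate joint descriptors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 5] + rng.normal(0, 0.1, 120)

def fitness(mask):
    """Cross-validated R^2 of a small network trained on the selected inputs."""
    if not mask.any():
        return -np.inf
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(net, X[:, mask], y, cv=3, scoring="r2").mean()

pop = rng.random((16, 8)) > 0.5                     # population of binary input masks
for _ in range(6):                                  # evolutionary phase: selection + mutation
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-8:]]          # keep the fitter half
    children = parents[rng.integers(0, 8, 8)].copy()
    flip = rng.random(children.shape) < 0.1         # bit-flip mutation
    children[flip] = ~children[flip]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected inputs:", np.flatnonzero(best))
```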

  11. Development of a sensor coordinated kinematic model for neural network controller training

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple degree of freedom manipulator whose end effector is under observation. The mappings developed include forward and inverse translations from binocular images to 3-D target position and the inverse kinematics mapping point positions into manipulator commands in joint space. Implementation is detailed for a three degree of freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube with arm links of 0.6 and 0.4 units, respectively. The development is presented in the context of more complex simulations, and a logical path for extension of the benchmark to higher degree of freedom manipulators is presented.
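
    A minimal joint-space/task-space mapping for a manipulator of the type described (one revolute base joint, two prismatic joints) is sketched below with a hypothetical geometry: the 0.6-unit link extends radially by d1 and the 0.4-unit link vertically by d2. It illustrates forward and inverse kinematics only, not the paper's camera models.

```python
# Hypothetical-geometry kinematics sketch: one revolute base joint (theta) and
# two prismatic joints (d1 radial, d2 vertical). Link lengths follow the 0.6 and
# 0.4 units quoted above; everything else is an assumption for illustration.
import numpy as np

L1, L2 = 0.6, 0.4

def forward(theta, d1, d2):
    """Joint space (theta, d1, d2) -> end-effector position (x, y, z)."""
    r = L1 + d1
    return np.array([r * np.cos(theta), r * np.sin(theta), L2 + d2])

def inverse(p):
    """End-effector position -> joint commands (closed form for this geometry)."""
    x, y, z = p
    return np.arctan2(y, x), np.hypot(x, y) - L1, z - L2

target = np.array([0.5, 0.3, 0.7])            # a point inside the unit-cube workspace
q = inverse(target)
print("joint solution (theta, d1, d2):", q)
print("round-trip error:", np.linalg.norm(forward(*q) - target))
```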

  12. Determination of rock-sample anisotropy from P- and S-wave traveltime inversion

    NASA Astrophysics Data System (ADS)

    Pšenčík, Ivan; Růžek, Bohuslav; Lokajíček, Tomáš; Svitek, Tomáš

    2018-04-01

    We determine the anisotropy of a rock sample from laboratory measurements of P- and S-wave traveltimes using the weak-anisotropy approximation and parametrization of the medium by a special set of anisotropy parameters. For the traveltime inversion we use first-order velocity expressions in the weak-anisotropy approximation, which allow us to deal with P and S waves separately. Each wave is described by 15 anisotropy parameters, 9 of which are common for both waves. The parameters allow an approximate construction of separate P- or common S-wave phase-velocity surfaces. The common S-wave concept is used to simplify the treatment of S waves. In order to obtain all 21 anisotropy parameters, P- and S-wave traveltimes must be inverted jointly. The proposed inversion scheme has several advantages. As a consequence of the use of the weak-anisotropy approximation and the assumed homogeneity of the rock sample, the equations used for the inversion are linear. Thus the inversion procedure is non-iterative. In the approximation used, phase and ray velocities are equal in their magnitude and direction. Thus, analysis of whether the measured velocity is the ray or the phase velocity is unnecessary. Another advantage of the proposed inversion scheme is that, thanks to the use of the common S-wave concept, it does not require identification of S-wave modes. It is sufficient to know the two S-wave traveltimes without specifying to which S-wave mode they belong. The inversion procedure is tested first on synthetic traveltimes and then used for the inversion of traveltimes measured in the laboratory. In both cases, we first perform the inversion of P-wave traveltimes alone and then the joint inversion of P- and S-wave traveltimes, and compare the results.
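
    The practical consequence of the linearity noted above is that the anisotropy parameters follow from a single, non-iterative least-squares solve. The sketch below shows that step for a generic linearised traveltime system; the sensitivity matrix and parameter vector are placeholders, not the paper's 21-parameter formulation.

```python
# Sketch of the non-iterative character of the inversion: when traveltimes are
# linear in the anisotropy parameters (weak-anisotropy approximation), the
# parameters follow from a single least-squares solve. G and the parameter
# vector here are generic placeholders, not the paper's 21-parameter system.
import numpy as np

rng = np.random.default_rng(11)
n_rays, n_params = 60, 9                     # many ray directions, few parameters
G = rng.normal(size=(n_rays, n_params))      # linearised traveltime sensitivities
m_true = rng.normal(0, 0.05, n_params)       # weak-anisotropy parameters
t_obs = G @ m_true + rng.normal(0, 1e-4, n_rays)

m_est, residuals, rank, sv = np.linalg.lstsq(G, t_obs, rcond=None)
print("parameter recovery error:", np.abs(m_est - m_true).max())
print("system rank:", rank, "of", n_params)
```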

  13. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.

  14. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a historymore » of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (\\url{muq.mit.edu}).« less

  15. Kinematics and control of redundant robotic arm based on dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Branz, Francesco; Antonello, Andrea; Carron, Andrea; Carli, Ruggero; Francesconi, Alessandro

    2015-04-01

    Soft robotics is a promising field and its application to space mechanisms could represent a breakthrough in space technologies by enabling new operative scenarios (e.g. soft manipulators, capture systems). Dielectric Elastomer Actuators have been under deep study for a number of years and have shown several advantages that could be of key importance for space applications. Among such advantages the most notable are high conversion efficiency, distributed actuation, self-sensing capability, multi-degree-of-freedom design, light weight and low cost. The strong potential of double-cone actuators has been demonstrated in terms of performance (i.e. stroke and force/torque), ease of manufacturing and durability. In this work the kinematic, dynamic and control design of a two-joint redundant robotic arm is presented. Two double-cone actuators are assembled in series to form a two-link design. Each joint has two degrees of freedom (one rotational and one translational) for a total of four. The arm is designed to move in a 2-D environment (i.e. the horizontal plane) with 4 DoF, consequently having two degrees of redundancy. The redundancy is exploited in order to minimize the joint loads. The kinematic design with redundant Jacobian inversion is presented. The selected control algorithm is described along with the results of a number of dynamic simulations that have been executed for performance verification. Finally, an experimental setup is presented based on a flexible structure that counteracts gravity during testing in order to better emulate future zero-gravity applications.
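
    Redundant Jacobian inversion of the kind mentioned above is commonly done with the Moore-Penrose pseudoinverse plus a null-space term for a secondary objective. The sketch below shows that resolution for a toy 2-task-DoF, 4-joint-DoF case; the Jacobian, gains and reference posture are assumptions, not the authors' controller.

```python
# Hedged sketch of redundant Jacobian inversion: for a planar task (2 task DoF)
# and 4 joint DoF, the minimum-norm joint velocity is J^+ xdot, and the null
# space of J can absorb a secondary objective (here, pulling joints toward a
# low-load reference posture). The Jacobian below is a numerical toy.
import numpy as np

J = np.array([[0.8, 0.2, 1.0, 0.0],      # 2 x 4 task Jacobian (toy values)
              [0.1, 0.9, 0.0, 1.0]])
xdot = np.array([0.05, -0.02])           # desired end-effector velocity in the plane

J_pinv = np.linalg.pinv(J)               # Moore-Penrose pseudoinverse
N = np.eye(4) - J_pinv @ J               # null-space projector

q = np.array([0.3, -0.2, 0.1, 0.05])     # current joint values
q_ref = np.zeros(4)                      # low-load reference posture (assumed)
k_null = 0.5

qdot = J_pinv @ xdot + N @ (k_null * (q_ref - q))   # redundancy resolution

print("task-space error of solution:", np.linalg.norm(J @ qdot - xdot))
print("joint velocities:", qdot)
```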

  16. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
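
    As an illustration of the stochastic search strategies discussed in the article, the sketch below runs a bare-bones simulated-annealing loop over a handful of beam weights against a toy quadratic dose objective. The dose matrix, prescription and cooling schedule are synthetic assumptions, not a clinical optimiser.

```python
# Illustrative stochastic search of the kind discussed above: simulated
# annealing over a small set of beam weights, with a toy quadratic "dose"
# objective. Dose matrix, prescription and cooling schedule are placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_voxels, n_beams = 50, 6
D = rng.random((n_voxels, n_beams))          # dose per unit beam weight (toy)
d_presc = np.full(n_voxels, 1.0)             # prescribed dose in every voxel

def cost(w):
    return np.mean((D @ w - d_presc) ** 2)

w = np.full(n_beams, 0.2)                    # initial beam weights
temperature = 1.0
for step in range(5000):
    w_new = np.clip(w + rng.normal(0, 0.02, n_beams), 0, None)  # random perturbation
    delta = cost(w_new) - cost(w)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        w = w_new                            # accept better or occasionally worse moves
    temperature *= 0.999                     # gradually constrict the search

print("final cost:", cost(w))
```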

  17. 1D-VAR Retrieval Using Superchannels

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel; Larar, Allen; Smith, William L.; Schluessel, Peter; Mango, Stephen; SaintGermain, Karen

    2008-01-01

    Since modern ultra-spectral remote sensors have thousands of channels, it is difficult to include all of them in a 1D-Var retrieval system. We will describe a physical inversion algorithm, which includes all available channels for the atmospheric temperature, moisture, cloud, and surface parameter retrievals. Both the forward model and the inversion algorithm compress the channel radiances into super channels. These super channels are obtained by projecting the radiance spectra onto a set of pre-calculated eigenvectors. The forward model provides both super channel properties and Jacobians in EOF space directly. For ultra-spectral sensors such as the Infrared Atmospheric Sounding Interferometer (IASI) and the NPOESS Airborne Sounder Testbed Interferometer (NAST), a compression ratio of more than 80 can be achieved, leading to a significant reduction in the computations involved in an inversion process. Results will be shown applying the algorithm to real IASI and NAST data.
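
    The super-channel compression can be illustrated with an EOF (principal-component) projection: spectra are projected onto a small set of pre-computed eigenvectors and all further work happens in that reduced space. The sketch below uses synthetic smooth spectra; real IASI spectra have thousands of channels and the eigenvectors would be pre-calculated offline.

```python
# Sketch of the super-channel idea: project high-dimensional radiance spectra
# onto a small set of pre-computed eigenvectors (EOFs) and work in that reduced
# space. The spectra below are synthetic, not real IASI/NAST radiances.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_spectra, n_super = 2000, 300, 25

# Synthetic training spectra: a few smooth modes plus noise
basis = np.array([np.sin((k + 1) * np.linspace(0, np.pi, n_channels)) for k in range(5)])
training = rng.normal(size=(n_spectra, 5)) @ basis + rng.normal(0, 0.01, (n_spectra, n_channels))

mean = training.mean(axis=0)
_, _, Vt = np.linalg.svd(training - mean, full_matrices=False)
eofs = Vt[:n_super]                            # "pre-calculated" eigenvectors

spectrum = training[0]
super_channels = eofs @ (spectrum - mean)      # compression into super channels
reconstructed = mean + eofs.T @ super_channels # back-projection for checking

print("compression ratio:", n_channels / n_super)
print("max reconstruction error:", np.abs(reconstructed - spectrum).max())
```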

  18. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.

  19. ERBE Geographic Scene and Monthly Snow Data

    NASA Technical Reports Server (NTRS)

    Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.

    1997-01-01

    The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.

  20. Transitionless driving on adiabatic search algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, Sangchul, E-mail: soh@qf.org.qa; Kais, Sabre, E-mail: kais@purdue.edu; Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907

    We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for the short running time to inverse-square decay in asymptotic running time. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse square decay to the inverse fourth power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.
