Sample records for the Tikhonov regularization method

  1. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson's equation as an illustration and derive new error estimates both for the reconstruction of the solution and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.
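
    To make the baseline method concrete, here is a minimal sketch of zero-order Tikhonov regularization for a discrete ill-posed linear system; the Hilbert test matrix, noise level, and value of alpha are illustrative assumptions, not taken from the paper.

    ```python
    # Zero-order Tikhonov regularization: min_x ||A x - b||^2 + alpha * ||x||^2.
    import numpy as np

    def tikhonov_solve(A, b, alpha):
        """Solve the regularized normal equations (A^T A + alpha I) x = A^T b."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    # A classic ill-conditioned forward operator: the Hilbert matrix.
    n = 12
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = np.sin(np.linspace(0, np.pi, n))
    b = A @ x_true + 1e-8 * np.random.default_rng(0).standard_normal(n)

    x_reg = tikhonov_solve(A, b, alpha=1e-10)
    x_naive = np.linalg.solve(A, b)  # unregularized: noise is amplified enormously
    print("regularized error:", np.linalg.norm(x_reg - x_true))
    print("naive error:      ", np.linalg.norm(x_naive - x_true))
    ```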

  2. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral ones, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further work is needed to develop methods and algorithms for solving the reduced mathematical programming problems, in which the objective functions and admissible domains are constructed using polyhedral vector norms.

  3. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We performed Monte Carlo simulations to show that the Tikhonov method is a consistent and asymptotically unbiased estimator. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly, without the need for any convergence criteria.
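
    As a schematic of the workflow described above, the sketch below discretizes a first-kind Fredholm equation and recovers the underlying distribution with second-order Tikhonov regularization; a Gaussian kernel stands in for the actual v sin i projection kernel, and the grids, peaks, noise level, and alpha are illustrative assumptions.

    ```python
    # Discretize g(s) = ∫ K(s, t) f(t) dt and recover f by Tikhonov regularization.
    import numpy as np

    m = n = 100
    s = np.linspace(0, 1, m)
    t = np.linspace(0, 1, n)
    dt = t[1] - t[0]

    # Quadrature turns the integral operator into a matrix: g ≈ K @ f.
    K = np.exp(-((s[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2)) * dt

    f_true = np.exp(-((t - 0.4) ** 2) / 0.005) + 0.5 * np.exp(-((t - 0.7) ** 2) / 0.002)
    g = K @ f_true + 1e-3 * np.random.default_rng(1).standard_normal(m)

    # Second-order Tikhonov: penalize curvature ||L f||^2, favoring smooth densities.
    L = np.diff(np.eye(n), 2, axis=0)
    alpha = 1e-5
    f_rec = np.linalg.solve(K.T @ K + alpha * (L.T @ L), K.T @ g)
    print("relative error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
    ```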

  4. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that Tikhonov regularization produces spherical harmonic solutions from GRACE that exhibit very little residual striping while capturing all of the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  5. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
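
    The decoupling idea can be illustrated on a linear toy problem: alternate between a Tikhonov-regularized data-fitting subproblem and an L2 total variation denoising subproblem solved by split Bregman. This loudly-hedged sketch illustrates the splitting concept only, not the authors' traveltime tomography code; the operators, weights, and update scheme are assumptions.

    ```python
    # Alternating sketch of an MTV-like scheme on a 1D blocky model.
    import numpy as np

    def tv_denoise_1d(f, mu, lam=1.0, iters=50):
        """Split-Bregman solver for min_u 0.5 ||u - f||^2 + mu ||D u||_1."""
        n = len(f)
        D = np.diff(np.eye(n), 1, axis=0)
        M = np.linalg.inv(np.eye(n) + lam * D.T @ D)
        d = np.zeros(n - 1)
        bb = np.zeros(n - 1)
        u = f.copy()
        for _ in range(iters):
            u = M @ (f + lam * D.T @ (d - bb))
            s = D @ u + bb
            d = np.sign(s) * np.maximum(np.abs(s) - mu / lam, 0.0)  # shrinkage
            bb = s - d
        return u

    def mtv_like(A, b, beta, mu, outer=20):
        n = A.shape[1]
        u = np.zeros(n)
        Minv = np.linalg.inv(A.T @ A + beta * np.eye(n))
        for _ in range(outer):
            m = Minv @ (A.T @ b + beta * u)   # Tikhonov-type data-fitting subproblem
            u = tv_denoise_1d(m, mu / beta)   # L2 total variation subproblem
        return u

    rng = np.random.default_rng(12)
    A = rng.standard_normal((150, 100))
    m_true = np.zeros(100)
    m_true[30:60] = 1.0
    m_true[60:] = 0.3                         # sharp ("blocky") model
    b = A @ m_true + 0.05 * rng.standard_normal(150)
    m_rec = mtv_like(A, b, beta=10.0, mu=1.0)
    print("relative error:", np.linalg.norm(m_rec - m_true) / np.linalg.norm(m_true))
    ```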

  6. Analysis of the Tikhonov regularization to retrieve thermal conductivity depth-profiles from infrared thermography data

    NASA Astrophysics Data System (ADS)

    Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Celorrio, Ricardo

    2010-09-01

    We analyze the ability of Tikhonov regularization to retrieve different shapes of in-depth thermal conductivity profiles, usually encountered in hardened materials, from surface temperature data. Exponential, oscillating, and sigmoidal profiles are studied. By performing theoretical experiments with added white noise, the influence of the order of the Tikhonov functional and of the parameters that need to be tuned to carry out the inversion is investigated. The analysis shows that Tikhonov regularization is very well suited to reconstructing smooth profiles but fails when the conductivity exhibits steep slopes. We check a natural alternative regularization, the total variation functional, which gives much better results for sigmoidal profiles. Accordingly, a strategy to deal with real data is proposed in which we introduce this total variation regularization. This regularization is applied to the inversion of real data corresponding to a case-hardened AISI 1018 steel plate, giving much better anticorrelation of the retrieved conductivity with microindentation test data than Tikhonov regularization. The results suggest that this is a promising way to improve the reliability of local inversion methods.

  7. On convergence and convergence rates for Ivanov and Morozov regularization and application to some parameter identification problems in elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Kaltenbacher, Barbara; Klassen, Andrej

    2018-05-01

    In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
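
    For linear problems, Morozov's method of the residuals is closely tied to Tikhonov regularization with the discrepancy-principle parameter choice; the sketch below implements that parameter choice by bisection. The test operator, noise level, and bisection bounds are illustrative assumptions, not the paper's setting.

    ```python
    # Discrepancy principle: choose alpha so that ||A x_alpha - b|| ≈ delta.
    import numpy as np

    def tikhonov(A, b, alpha):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    def discrepancy_alpha(A, b, delta, lo=1e-14, hi=1e2, iters=60):
        """Bisection in log(alpha); the residual norm grows monotonically with alpha."""
        for _ in range(iters):
            mid = np.sqrt(lo * hi)
            r = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
            lo, hi = (mid, hi) if r < delta else (lo, mid)
        return np.sqrt(lo * hi)

    rng = np.random.default_rng(2)
    A = rng.standard_normal((80, 60)) @ np.diag(0.9 ** np.arange(60))  # decaying spectrum
    x_true = rng.standard_normal(60)
    noise = 1e-2 * rng.standard_normal(80)
    b = A @ x_true + noise
    delta = np.linalg.norm(noise)  # noise level assumed known

    alpha = discrepancy_alpha(A, b, delta)
    print("chosen alpha:", alpha)
    print("residual vs delta:", np.linalg.norm(A @ tikhonov(A, b, alpha) - b), delta)
    ```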

  8. Unfolding sphere size distributions with a density estimator based on Tikhonov regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weese, J.; Korat, E.; Maier, D.

    1997-12-01

    This report proposes a method for unfolding sphere size distributions from a sample of radii that combines the advantages of a density estimator with those of Tikhonov regularization methods. The following topics are discussed in this report: the relation between the profile and the sphere size distribution; the method for unfolding sphere size distributions; results based on simulations; and a comparison with experimental data.

  9. A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays

    PubMed Central

    Hughes, Alec; Hynynen, Kullervo

    2016-01-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323

  10. A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.

    PubMed

    Hughes, Alec; Hynynen, Kullervo

    2016-12-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.

  11. The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces

    NASA Astrophysics Data System (ADS)

    Wang, Min

    2017-06-01

    This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. We then establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.

  12. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
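
    One of the well-performing combinations named above, generalized cross-validation with a second-derivative operator, can be sketched as follows; the smoothing kernel, grid, and noise level are assumptions, and this is not the authors' DEER analysis code.

    ```python
    # GCV(alpha) = ||(I - H) b||^2 / tr(I - H)^2, with H the Tikhonov influence matrix.
    import numpy as np

    def gcv_score(A, b, L, alpha):
        H = A @ np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T)
        r = b - H @ b
        return (r @ r) / np.trace(np.eye(A.shape[0]) - H) ** 2

    rng = np.random.default_rng(3)
    n = 80
    t = np.linspace(0, 1, n)
    A = np.exp(-30 * np.abs(t[:, None] - t[None, :])) / n   # smoothing kernel
    x_true = np.exp(-((t - 0.5) ** 2) / 0.01)
    b = A @ x_true + 1e-4 * rng.standard_normal(n)
    L = np.diff(np.eye(n), 2, axis=0)                       # second-derivative operator

    alphas = np.logspace(-12, 0, 60)
    scores = [gcv_score(A, b, L, a) for a in alphas]
    print("GCV-optimal alpha:", alphas[int(np.argmin(scores))])
    ```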

  13. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.

  14. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by incorporating moving average concepts. First, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Second, the signal feature of a DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
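
    One plausible reading of the improved penalty is sketched below: instead of the classical ||x||_2^2, penalize the deviation of the force from its own moving average, ||(I - S) x||^2 with S a moving-average matrix. This illustrates the concept only, not the authors' exact functional; the window length, weights, and toy system are assumptions.

    ```python
    # Moving-average-modified Tikhonov penalty on a toy deconvolution problem.
    import numpy as np

    def moving_average_matrix(n, w):
        """Row-stochastic matrix S whose i-th row averages a length-w window."""
        S = np.zeros((n, n))
        for i in range(n):
            lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
            S[i, lo:hi] = 1.0 / (hi - lo)
        return S

    def ma_tikhonov(A, b, lam, w):
        n = A.shape[1]
        D = np.eye(n) - moving_average_matrix(n, w)   # penalty operator (I - S)
        return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)

    # Impulse-response (convolution) matrix of a decaying oscillator.
    n = 200
    h = np.exp(-0.05 * np.arange(n)) * np.sin(0.3 * np.arange(n))
    A = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
    x_true = np.zeros(n)
    x_true[40:140] = 1.0                              # force with a stable average
    b = A @ x_true + 0.01 * np.random.default_rng(4).standard_normal(n)

    x_rec = ma_tikhonov(A, b, lam=1e-2, w=9)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```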

  15. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.

  16. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by minimizing U_T(m) = ||F(m) - d||^2 + λ P(m). The second term corresponds to the stabilizing functional, with P(m) = ||∇m||^2 the usual choice, and λ the regularization parameter. Due to the inclusion of the roughness penalizer, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers in nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, and also on hybrid TV, second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is that of magnetic dipoles at low induction numbers. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.

  17. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. 
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue. 
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  18. Feasibility of inverse problem solution for determination of city emission function from night sky radiance measurements

    NASA Astrophysics Data System (ADS)

    Petržala, Jaromír

    2018-07-01

    The knowledge of the emission function of a city is crucial for simulating sky glow in its vicinity. Indirect methods to obtain this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving this inverse problem, namely, testing the fitness of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model for the calculation of sky spectral radiance in the form of a functional of the emission spectral radiance. Subsequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and contaminated by random errors. The results demonstrate that the second-order Tikhonov regularization method, together with choosing the regularization parameter by the L-curve maximum-curvature criterion, provides solutions that are in good agreement with the assumed model emission functions.
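
    The L-curve maximum-curvature criterion favored above can be sketched as follows: trace the curve (log residual norm, log solution seminorm) over alpha and pick the point of maximum curvature. The test problem and alpha grid are illustrative assumptions.

    ```python
    # L-curve corner detection for Tikhonov regularization.
    import numpy as np

    def tikhonov(A, b, L, alpha):
        return np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ b)

    def lcurve_corner(A, b, L, alphas):
        xs = [tikhonov(A, b, L, a) for a in alphas]
        rho = np.log([np.linalg.norm(A @ x - b) for x in xs])   # log residual norm
        eta = np.log([np.linalg.norm(L @ x) for x in xs])       # log seminorm
        d1r, d1e = np.gradient(rho), np.gradient(eta)
        d2r, d2e = np.gradient(d1r), np.gradient(d1e)
        kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
        return alphas[int(np.argmax(kappa))]                    # maximum curvature

    rng = np.random.default_rng(5)
    n = 60
    t = np.linspace(0, 1, n)
    A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01) / n
    x_true = np.sin(2 * np.pi * t) ** 2
    b = A @ x_true + 1e-4 * rng.standard_normal(n)
    L = np.diff(np.eye(n), 1, axis=0)

    alphas = np.logspace(-12, 2, 100)
    print("L-curve corner alpha:", lcurve_corner(A, b, L, alphas))
    ```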

  19. Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-11-01

    A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks were studied. The distributions differed with regard to the variance of the peaks and were designed to investigate a range of distributions containing only discrete, only smooth, or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed by the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.

  20. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
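
    To make the discretization step concrete, the sketch below sets up the Abel equation g(y) = 2 ∫_y^R f(r) r / sqrt(r^2 - y^2) dr with piecewise-constant f, whose cell integrals have a closed form that sidesteps the singularity, and stabilizes the inversion with Tikhonov regularization. The grid, profile, noise, and alpha are assumptions; the compact-set constraints described in the paper are not shown.

    ```python
    # Abel inversion via exact cell weights plus second-order Tikhonov regularization.
    import numpy as np

    R, n = 1.0, 100
    r = np.linspace(0, R, n + 1)          # cell edges
    rc = 0.5 * (r[:-1] + r[1:])           # cell centers carry the f values
    y = r[:-1]                            # impact parameters of the lines of sight

    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):             # only cells with r >= y contribute
            K[i, j] = 2 * (np.sqrt(r[j + 1] ** 2 - y[i] ** 2)
                           - np.sqrt(max(r[j] ** 2 - y[i] ** 2, 0.0)))

    f_true = np.exp(-((rc - 0.3) ** 2) / 0.01)
    g = K @ f_true + 1e-3 * np.random.default_rng(6).standard_normal(n)

    L = np.diff(np.eye(n), 2, axis=0)     # favor smooth radial profiles
    alpha = 1e-6
    f_rec = np.linalg.solve(K.T @ K + alpha * (L.T @ L), K.T @ g)
    print("relative error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
    ```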

  1. Calibrating the Spatiotemporal Root Density Distribution for Macroscopic Water Uptake Models Using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Li, N.; Yue, X. Y.

    2018-03-01

    Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are the most commonly used approach to modeling water uptake by plants. As water uptake is difficult and labor-intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume the RDDF to be constant with depth and time, or dependent on depth only, for simplification. However, under field conditions this function varies with soil type and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate a both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of Tikhonov regularization theory, adding an additional constraint to the objective function. The formulated nonlinear optimization problem is then solved numerically with an efficient algorithm based on the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved based on the variational construction, which circumvents the computational complexity of calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method optimizes the RDDF without assuming any prior form, which is applicable to a more general root water uptake model. Numerical examples are presented to illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.

  2. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.

  3. Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography

    NASA Technical Reports Server (NTRS)

    Xu, Feng; Deshpande, Manohar

    2012-01-01

    Low-frequency electromagnetic tomography, such as electrical capacitance tomography (ECT), has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Because the ECT inverse problem is ill-posed, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actual measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivities of the two phases on the unknown pixels which exceed the physically reasonable range of permittivity. This strategy not only stabilizes the convergence process but also produces sharper images. Simulations show that a resolution improvement of over two times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
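
    The constraint-enforcement idea can be sketched on a generic linearized problem: after each Tikhonov-regularized update, pixels outside the physically admissible range are clipped back to the known two-phase values. This loudly-hedged sketch is not the authors' FEM-based implementation; the operator, bounds, and iteration count are assumptions.

    ```python
    # Iterated Tikhonov updates with projection onto the known permittivity range.
    import numpy as np

    def constrained_tikhonov(A, b, alpha, lo, hi, iters=30):
        n = A.shape[1]
        x = np.full(n, 0.5 * (lo + hi))                 # start mid-range
        M = np.linalg.inv(A.T @ A + alpha * np.eye(n))
        for _ in range(iters):
            x = M @ (A.T @ b + alpha * x)               # iterated Tikhonov step
            x = np.clip(x, lo, hi)                      # enforce the two-phase bounds
        return x

    rng = np.random.default_rng(7)
    A = rng.standard_normal((120, 100))
    x_true = (rng.random(100) > 0.6).astype(float)      # binary two-phase "image"
    b = A @ x_true + 0.05 * rng.standard_normal(120)

    x = constrained_tikhonov(A, b, alpha=5.0, lo=0.0, hi=1.0)
    print("fraction correctly classified:", np.mean((x > 0.5) == (x_true > 0.5)))
    ```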

  4. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  5. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  6. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.

  7. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, including Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust in both the single and the consecutive impact force reconstruction.
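
    The l1-versus-l2 contrast can be sketched with ISTA (iterative soft thresholding), used here as a simpler substitute for the paper's primal-dual interior point method: it solves min_x 0.5 ||A x - b||^2 + mu ||x||_1. The impulse response, mu, and iteration count are assumptions.

    ```python
    # Sparse deconvolution of impact-like forces via ISTA.
    import numpy as np

    def ista(A, b, mu, iters=500):
        step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x - step * (A.T @ (A @ x - b))           # gradient step on data term
            x = np.sign(x) * np.maximum(np.abs(x) - step * mu, 0.0)  # soft threshold
        return x

    # Convolution matrix of a short decaying impulse response.
    n = 300
    h = np.exp(-0.1 * np.arange(40)) * np.sin(0.5 * np.arange(40))
    A = np.zeros((n, n))
    for j in range(n):
        A[j:min(j + 40, n), j] = h[:min(40, n - j)]

    x_true = np.zeros(n)
    x_true[[60, 150, 210]] = [3.0, 1.5, 2.0]             # sparse impact sequence
    b = A @ x_true + 0.01 * np.random.default_rng(8).standard_normal(n)

    x = ista(A, b, mu=0.05)
    print("recovered impact samples:", np.nonzero(np.abs(x) > 0.5)[0])
    ```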

  8. Three-dimensional ionospheric tomography reconstruction using the model function approach in Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu

    2016-12-01

    Ionospheric tomography uses the slant total electron content (sTEC) observed along different satellite-receiver rays to reconstruct the three-dimensional electron density distribution. Due to the incomplete measurements provided by the satellite-receiver geometry, it is a typical ill-posed problem, and how to overcome the ill-posedness remains a central research question. In this paper, the Tikhonov regularization method is used and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance with a location-dependent spatial correlation, and the correlation model is developed using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used for independent validation. Both the test cases using artificial sTEC observations and those using actual GNSS sTEC measurements show that the regularization method can effectively improve on the background model outputs.

  9. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.

  10. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  11. Generalized Bregman distances and convergence rates for non-convex regularization methods

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus

    2010-11-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds if the regularization term has slightly faster growth at zero than |t|^p.
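
    For orientation, the quoted sparse-regularization result can be written compactly as below; this is a paraphrase in assumed notation (basis (φ_k), growth function r), not the paper's precise theorem statement.

    ```latex
    % Tikhonov functional with a sparsity-promoting term and the stated rate:
    x_\alpha^\delta \in \operatorname*{arg\,min}_x \;
        \|F(x) - y^\delta\|^2 + \alpha \sum_k r\big(\langle x, \phi_k \rangle\big),
    \qquad
    r(t) \text{ growing slightly faster than } |t|^p \text{ at } t = 0
    \;\Longrightarrow\;
    \|x_\alpha^\delta - x^\dagger\| = \mathcal{O}\big(\delta^{1/p}\big).
    ```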

  12. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, which can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it is of interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.

  13. Enhanced nearfield acoustic holography for larger distances of reconstructions using fixed parameter Tikhonov regularization

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.

    2016-07-07

    This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.

  14. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and they also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing an L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled as L1TV and L1L2) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method, labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative errors. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods obtained more accurate EPs in a robust manner. Therefore, the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
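
    The iteratively reweighted norm idea for an L1-norm data term can be sketched as follows: large residuals are progressively downweighted so the weighted least-squares iterates approach the L1 data fit, min_x ||A x - b||_1 + lam ||L x||_2^2. The test problem, weight floor eps, and lam are illustrative assumptions.

    ```python
    # IRLS for an L1 data term with a quadratic first-derivative penalty.
    import numpy as np

    def l1_data_irls(A, b, L, lam, iters=30, eps=1e-6):
        x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)  # L2 warm start
        for _ in range(iters):
            w = 1.0 / np.maximum(np.abs(A @ x - b), eps)  # downweight large residuals
            AW = A * w[:, None]                           # rows scaled by the weights
            x = np.linalg.solve(A.T @ AW + lam * (L.T @ L), AW.T @ b)
        return x

    rng = np.random.default_rng(9)
    m, n = 150, 80
    A = rng.standard_normal((m, n))
    x_true = np.cumsum(rng.standard_normal(n)) / 10
    b = A @ x_true + 0.01 * rng.standard_normal(m)
    b[::15] += 5.0                                        # gross outliers ("lost" channels)
    L = np.diff(np.eye(n), 1, axis=0)

    x_l1 = l1_data_irls(A, b, L, lam=1e-2)
    print("relative error:", np.linalg.norm(x_l1 - x_true) / np.linalg.norm(x_true))
    ```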

  15. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to the approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of the singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
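
    Viewed through the SVD, Tikhonov regularization applies filter factors f_i = s_i^2 / (s_i^2 + alpha^2) to the singular components, and the model resolution matrix follows directly, as in the sketch below; the operator and alpha are illustrative assumptions.

    ```python
    # Tikhonov filter factors and the resolution matrix from the SVD.
    import numpy as np

    rng = np.random.default_rng(10)
    G = rng.standard_normal((100, 60)) @ np.diag(0.85 ** np.arange(60))  # decaying spectrum
    d = G @ rng.standard_normal(60) + 1e-3 * rng.standard_normal(100)

    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    alpha = 1e-2
    f = s**2 / (s**2 + alpha**2)            # filter factors damp small singular values

    # Regularized model and resolution matrix R = V diag(f) V^T; for noise-free
    # data m_est = R m_true, so diagonal entries near 1 mark well-resolved parameters.
    m_est = Vt.T @ (f * (U.T @ d) / s)
    Rres = Vt.T @ (f[:, None] * Vt)
    print("worst-resolved diagonal entry:", Rres.diagonal().min())
    ```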

  16. Phillips-Tikhonov regularization with a priori information for neutron emission tomographic reconstruction on Joint European Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bielecki, J.; Scholz, M.; Drozdowicz, K.

    A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. The aim of this work is to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that the method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction on JET.

  17. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    NASA Astrophysics Data System (ADS)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. In the solution of this problem, an important role is played by classes of functions that were introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas, and which contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of a generalized Haar wavelet series with approximate coefficients is proved for functions from this class. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth makes it possible to receive information of medium and high spatial resolution from spacecraft and to conduct hyperspectral measurements. Spacecraft carry tens or hundreds of spectral channels. To process the images, the apparatus of discrete orthogonal transforms, namely wavelet transforms, was used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular, generalized Haar wavelet transforms. General research methods: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images are used. Scientific novelty: the task of processing archival satellite images, in particular signal filtering, was investigated from the point of view of an ill-posed problem, and the regularization parameters for discrete orthogonal transformations were determined.

  18. Digital Signal Processing Methods for Ultrasonic Echoes.

    PubMed

    Sinding, Kyle; Drapaca, Corina; Tittmann, Bernhard

    2016-04-28

    Digital signal processing has become an important component of the data analysis needed in industrial applications. In particular, for ultrasonic thickness measurements the signal-to-noise ratio plays a major role in the accurate calculation of the arrival time. For this application a band-pass filter is not sufficient, since the noise level cannot be decreased enough for a reliable thickness measurement. This paper demonstrates the abilities of two regularization methods - total variation and Tikhonov - to filter acoustic and ultrasonic signals. Both of these methods are compared to frequency-based filtering for digitally produced signals as well as signals produced by ultrasonic transducers. This paper demonstrates the ability of the total variation and Tikhonov filters to recover signals from noisy acoustic signals accurately and faster than a band-pass filter. Furthermore, the total variation filter has been shown to reduce the noise of a signal significantly for signals with clear ultrasonic echoes. Signal-to-noise ratios have been increased by over 400% using a simple parameter optimization. While frequency-based filtering is efficient for specific applications, this paper shows that the reduction of noise in ultrasonic systems can be much more efficient with regularization methods.

  19. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic regularization parameter selection scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.
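
    For the quadratic (Tikhonov) case, the GCV score has an inexpensive closed form through the SVD; the paper applies the same selection idea to the TV problem, where the penalty differs. A minimal sketch under the assumption of a generic dense blurring matrix A:

        # GCV(lam) = ||A x_lam - b||^2 / trace(I - A A_lam^+)^2, evaluated
        # cheaply from one SVD; pick the lam that minimizes the score.
        import numpy as np

        def gcv_curve(A, b, lams):
            U, s, _ = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            out2 = b @ b - beta @ beta      # data component outside range(A)
            m = A.shape[0]
            scores = []
            for lam in lams:
                f = s ** 2 / (s ** 2 + lam)             # Tikhonov filter factors
                resid2 = np.sum(((1 - f) * beta) ** 2) + out2
                scores.append(resid2 / (m - np.sum(f)) ** 2)
            return np.array(scores)

        # usage: lams = np.logspace(-8, 2, 60)
        #        lam_best = lams[np.argmin(gcv_curve(A, b, lams))]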

  20. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method combining variable spherical kernel sizes with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of recovering the initial condition from boundary values of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule and obtain a severely ill-conditioned linear system that is sensitive to disturbances in the data: a tiny error in the right-hand-side data causes large oscillations in the solution, and good results cannot be obtained by traditional methods. In this paper, we solve this problem by the Tikhonov regularization method, and the numerical simulations demonstrate that this method is feasible and effective.
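
    The pipeline the abstract describes (quadrature discretization of a first-kind Fredholm equation, then Tikhonov regularization) can be illustrated in a few lines; the smooth kernel and data below are illustrative stand-ins, not the paper's:

        # Discretize  ∫ K(s,t) x(t) dt = y(s)  by the trapezoidal rule,
        # then regularize the severely ill-conditioned system.
        import numpy as np

        n = 100
        t = np.linspace(0.0, 1.0, n)
        w = np.full(n, t[1] - t[0])
        w[0] *= 0.5
        w[-1] *= 0.5                                       # trapezoidal weights
        K = np.exp(-5.0 * (t[:, None] - t[None, :]) ** 2)  # smooth kernel
        A = K * w                                          # quadrature matrix
        x_true = np.sin(np.pi * t)
        y = A @ x_true + 1e-4 * np.random.default_rng(1).standard_normal(n)

        lam = 1e-6
        x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)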

  2. 3D Gravity Inversion using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Toushmalani, Reza; Saibi, Hakim

    2015-08-01

    Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They used 3D Euler deconvolution to extract the best anomaly source depth as a priori information for inverting the gravity data, and provided a synthetic example. Finally, they applied the gravity inversion to gravity data recently acquired from Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of the salt dome in this region.

  3. Cortical dipole imaging using truncated total least squares considering transfer matrix error.

    PubMed

    Hori, Junichi; Takeuchi, Kosuke

    2013-01-01

    Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram in high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). The TTLS is a regularization technique that reduces the influence of both measurement noise and the transfer matrix error caused by head model distortion. The estimation of the regularization parameter, based on the L-curve, was also investigated. Computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that the TTLS provided high spatial resolution in cortical dipole imaging.
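
    Truncated TLS replaces the SVD of the transfer matrix A with the SVD of the augmented matrix [A b], so that errors in A itself are also accounted for. A minimal sketch of the standard textbook formula, with hypothetical dimensions and truncation level:

        import numpy as np

        def ttls(A, b, k):
            """Truncated TLS solution keeping the k largest singular
            values of the augmented matrix [A b]."""
            n = A.shape[1]
            _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
            V = Vt.T
            V12 = V[:n, k:]     # top-right block, n x (n+1-k)
            V22 = V[n:, k:]     # bottom-right block, 1 x (n+1-k)
            return (-V12 @ V22.T / np.sum(V22 ** 2)).ravel()

        # usage: x = ttls(A, b, k=10), with k chosen e.g. from the L-curve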

  4. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.

  5. Characterizing the functional MRI response using Tikhonov regularization.

    PubMed

    Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E

    2007-09-20

    The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. Copyright © 2007 John Wiley & Sons, Ltd.

  6. Characterization of Window Functions for Regularization of Electrical Capacitance Tomography Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Peng, Lihui; Xiao, Deyun

    2007-06-01

    This paper presents a regularization method that uses different window functions as regularization for electrical capacitance tomography (ECT) image reconstruction. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to make the solution stable by suppressing the noise in the measurements. The window functions, such as the Hanning window and the cosine window, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularization.
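
    The idea is to replace the Tikhonov filter factors in the SVD expansion with factors derived from a window function. A minimal sketch with a Hanning-type taper over the singular-value index (the paper's ECT-specific modifications of the windows are not reproduced here):

        import numpy as np

        def windowed_inverse(A, b, q):
            """Filtered SVD solution: keep the first q singular components,
            tapered by a Hanning-style window, and discard the rest."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            f = np.zeros_like(s)
            i = np.arange(q)
            f[:q] = 0.5 * (1.0 + np.cos(np.pi * i / q))  # f[0] = 1, decays toward 0
            return Vt.T @ (f * (U.T @ b) / s)            # assumes s > 0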

  7. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.

  8. Point-spread function reconstruction in ground-based astronomy by l(1)-l(p) model.

    PubMed

    Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing

    2012-11-01

    In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally interfered with by atmospheric turbulence, and hence images so acquired are blurred with an unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l(1)-l(p) (p=1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model.

  9. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.

  10. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress the noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the Laplacian sensitivity as a regularization term. With the optimal regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with experimental data.

  11. Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.

    PubMed

    Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf

    2018-05-12

    We investigated a novel sparsity-based regularization method in the wavelet domain for the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry consisting of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of the quality of reconstructed epicardial potentials, estimated activation and recovery times, and estimated pacing locations, and compared with the performance of Tikhonov zeroth-order regularization. Results in the wavelet domain obtained higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of the origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application by allowing one to choose wavelet bases that are optimized for specific clinical questions. Graphical Abstract: The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed, and solving it requires additional constraints for regularization. We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application by allowing one to choose wavelet bases that are optimized for specific clinical questions.

  12. Regularization of the double period method for experimental data processing

    NASA Astrophysics Data System (ADS)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
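
    A second-derivative stabilizer of this kind has a simple discrete form on a uniform grid. The sketch below (illustrative data, not the paper's double-period construction) smooths noisy samples y by minimizing ||x - y||^2 + lam ||D2 x||^2:

        import numpy as np

        def smooth_curve(y, lam):
            """Second-difference Tikhonov smoother on a uniform grid."""
            n = len(y)
            D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n, rows [1, -2, 1]
            return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

        rng = np.random.default_rng(2)
        y = np.sin(np.linspace(0, 3, 200)) + 0.1 * rng.standard_normal(200)
        x_smooth = smooth_curve(y, lam=10.0)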

  13. Bayesian prestack seismic inversion with a self-adaptive Huber-Markov random-field edge protection scheme

    NASA Astrophysics Data System (ADS)

    Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun

    2013-12-01

    Seismic inversion is a highly ill-posed problem, due to many factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, smoothing constraints such as Tikhonov regularization are usually applied. The Tikhonov method can maintain a globally smooth solution but blurs structure edges. In this paper we use the Huber-Markov random-field edge protection method in the procedure of inverting three parameters: P-velocity, S-velocity and density. The method avoids blurring structure edges and resists noise. For the parameter to be inverted, the Huber-Markov random field constructs a neighborhood system, which further acts as the vertical and lateral constraints. We use a quadratic Huber edge penalty function within layers to suppress noise and a linear one at the edges to avoid a fuzzy result. The effectiveness of our method is proved by inverting synthetic data with and without noise. The relationship between the adopted constraints and the inversion results is analyzed as well.

  14. A fully Galerkin method for the recovery of stiffness and damping parameters in Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.

    1991-01-01

    A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
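
    The L-curve mentioned above plots the solution norm against the residual norm on log-log axes over a sweep of regularization parameters; the corner of the "L" suggests a good parameter. A minimal sketch for the zeroth-order Tikhonov case, assuming a generic dense matrix A:

        import numpy as np

        def lcurve_points(A, b, lams):
            """Return (log residual norm, log solution norm) per lambda."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            pts = []
            for lam in lams:
                f = s ** 2 / (s ** 2 + lam)     # Tikhonov filter factors
                x = Vt.T @ (f * beta / s)
                pts.append((np.log(np.linalg.norm(A @ x - b)),
                            np.log(np.linalg.norm(x))))
            return np.array(pts)

        # the corner (point of maximum curvature) gives the suggested lambda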

  15. Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.

    2016-03-15

    In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full widths at half maximum that are off by an average of 27%, and matched full widths at half maximum produce depths that are off by an average of 109%.

  16. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
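
    Of the three regularizers compared, TSVD is the simplest: singular values below a threshold are discarded outright rather than damped. A minimal sketch of a TSVD pseudoinverse, as might be applied to the interpolator computation (threshold and dimensions hypothetical):

        import numpy as np

        def tsvd_pinv(A, rtol):
            """Pseudoinverse keeping singular values above rtol * s_max."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            keep = s > rtol * s[0]
            return (Vt[keep].T / s[keep]) @ U[:, keep].T

        # usage: x = tsvd_pinv(A, rtol=1e-3) @ b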

  17. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission in a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method, since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method for finding the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, and the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios of up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.

  18. Zernike ultrasonic tomography for fluid velocity imaging based on pipeline intrusive time-of-flight measurements.

    PubMed

    Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy

    2014-11-01

    In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging based on the Zernike polynomial series. Taking intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) as input, we provide as output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). By representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series from the acquired ultrasonic measurements. This problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity by employing SVD-based filtering to select which Zernike polynomials are included in the series. The first approach, Tikhonov regularization without filtering, proves to be the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show a relative residual norm and an error in flow estimation of ~0.3% and ~1.6%, respectively, for the less turbulent flow, and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a flow physical model.

  19. Image-guided spatial localization of heterogeneous compartments for magnetic resonance

    PubMed Central

    An, Li; Shen, Jun

    2015-01-01

    Purpose: Image-guided SPectral Localization Achieved by Sensitivity Heterogeneity (SPLASH) allows rapid measurement of signals from irregularly shaped anatomical compartments without using phase encoding gradients. Here, the authors propose a novel method to address the issue of heterogeneous signal distribution within the localized compartments. Methods: Each compartment was subdivided into multiple subcompartments, and their spectra were solved for by Tikhonov regularization to enforce smoothness within each compartment. The spectrum of a given compartment was generated by combining the spectra of the components of that compartment. The proposed method was first tested using Monte Carlo simulations and then applied to reconstructing in vivo spectra from irregularly shaped ischemic stroke and normal tissue compartments. Results: Monte Carlo simulations demonstrate that the proposed regularized SPLASH method significantly reduces localization and metabolite quantification errors. In vivo results show that the intracompartment regularization results in ∼40% reduction of error in metabolite quantification. Conclusions: The proposed method significantly reduces localization errors and metabolite quantification errors caused by intracompartment heterogeneous signal distribution. PMID:26328977

  20. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors than the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of filtered hydrological model output is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
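
    Golub-Kahan (Lanczos) bidiagonalization reduces A to a small lower-bidiagonal matrix on which the parameter search becomes cheap. A bare-bones sketch without reorthogonalization, adequate only for illustration and assuming no breakdown:

        import numpy as np

        def golub_kahan(A, b, k):
            """k steps of Golub-Kahan bidiagonalization of A, started from b."""
            m, n = A.shape
            U = np.zeros((m, k + 1))
            V = np.zeros((n, k))
            alpha = np.zeros(k)
            beta = np.zeros(k + 1)
            beta[0] = np.linalg.norm(b)
            U[:, 0] = b / beta[0]
            v = np.zeros(n)
            for j in range(k):
                r = A.T @ U[:, j] - beta[j] * v
                alpha[j] = np.linalg.norm(r)
                v = r / alpha[j]
                V[:, j] = v
                p = A @ v - alpha[j] * U[:, j]
                beta[j + 1] = np.linalg.norm(p)
                U[:, j + 1] = p / beta[j + 1]
            return U, V, alpha, beta   # A V ≈ U B, with B bidiagonal from alpha, beta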

  1. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.

  2. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

    The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to improve the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20, which has a PRE of 0.36% and an RE of 2.45%, is superior to the other cases of the FARF method and to traditional regularization methods for dynamic force reconstruction.
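
    Landweber iteration is the simplest member of the iterative filter family invoked above: a damped gradient descent on the data misfit, where the iteration count plays the role of the regularization parameter. A minimal sketch with a conservatively chosen step size (not the authors' FARF scheme):

        import numpy as np

        def landweber(A, b, n_iter, omega=None):
            """x_{k+1} = x_k + omega * A^T (b - A x_k); early stopping regularizes."""
            if omega is None:
                omega = 1.0 / np.linalg.norm(A, 2) ** 2   # safely below 2 / sigma_max^2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + omega * (A.T @ (b - A @ x))
            return x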

  3. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

    The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.

  4. Combined neural network/Phillips-Tikhonov approach to aerosol retrievals over land from the NASA Research Scanning Polarimeter

    NASA Astrophysics Data System (ADS)

    Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.

    2017-11-01

    In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.

  5. An efficient method for model refinement in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary-value optimization problem that necessitates regularization. Bayesian methods are also suitable, because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, so model-retrieval criteria, especially total least squares (TLS), must be applied to refine the model error. The use of TLS is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) to treat the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes the abnormality well.

  6. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.

  7. Dissatisfaction of Compact Picard Condition (CPC) with GRACE satellite data and its treatment by Generalized Tikhonov in Sobolev subspace

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Y.; Bagheri, H.; Safari, A.; Sharifi, M.

    2012-04-01

    This paper mainly aims to prove that the stripy noise in the map of the Earth's surface mass-density changes derived from GRACE satellite gravimetry is due to a dissatisfaction of the Compact Picard Condition (CPC) with the GRACE data in the inversion of the Newton integral equation over the thin layer of the Earth; hence the paper proposes regularization strategies as efficient tools to treat the ill-posedness and consequently to de-strip the data. First of all, we slightly modify the mathematical model of the Earth's surface mass-density changes first developed by J. Wahr et al. (1998), retaining all their previous assumptions and additionally taking into consideration the effect of the Earth's topography. Through this modification we expect that some uncertainties in the prior model are reduced to some extent. We then analyze the CPC on the model and demonstrate how to perform Generalized Tikhonov regularization in a Sobolev subspace to overcome the instability of the problem. We apply the strategy in simulations and case studies to validate our ideas. The simulations confirm that the stripy noise in the GRACE-derived map of mass-density changes is due to the CPC dissatisfaction, and the case studies show that Generalized Tikhonov regularization in the Sobolev subspace is an effective filtering tool for de-striping the noisy data. The case studies also interestingly show that the effect of the topography is comparable to the effect of the load Love numbers on Wahr's model; hence it should be taken into consideration when the load Love numbers are taken into account.

  8. A modified conjugate gradient method based on the Tikhonov system for computerized tomography (CT).

    PubMed

    Wang, Qi; Wang, Huaxiang

    2011-04-01

    During the past few decades, computerized tomography (CT) has been widely used for non-destructive testing (NDT) and non-destructive examination (NDE) in industry because of its non-invasiveness and visibility. Recently, CT technology has been applied to multi-phase flow measurement. Using the principle of radiation attenuation measurements along different directions through the investigated object, together with a special reconstruction algorithm, cross-sectional information of the scanned object can be worked out. It is a typical inverse problem and has always been a challenge because of its nonlinearity and ill-conditioning. The Tikhonov regularization method is widely used for similar ill-posed problems. However, the conventional Tikhonov method does not provide reconstructions of sufficient quality; the relative errors between the reconstructed images and the real distribution need to be further reduced. In this paper, a modified conjugate gradient (CG) method is applied to a Tikhonov system (MCGT method) for reconstructing CT images. The computational load is dominated by the number of independent measurements m, and a preconditioner is introduced to lower the condition number of the Tikhonov system. Both simulation and experiment results indicate that the proposed method can reduce the computational time and improve the quality of image reconstruction. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
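
    Loosely in the spirit of the MCGT idea, the sketch below runs conjugate gradients on the Tikhonov normal equations (A^T A + lam I) x = A^T b with a simple Jacobi (diagonal) preconditioner to lower the condition number. It assumes SciPy is available and is illustrative only, not the authors' implementation:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def preconditioned_tikhonov_cg(A, b, lam):
            n = A.shape[1]
            H = LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x) + lam * x)
            diag = np.einsum('ij,ij->j', A, A) + lam     # diag(A^T A + lam I)
            M = LinearOperator((n, n), matvec=lambda x: x / diag)
            x, info = cg(H, A.T @ b, M=M)                # info == 0 on convergence
            return x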

  9. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Chu, Pan; Lei, Jing

    2017-11-01

    Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves robustness.

  10. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  11. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.

  12. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. Effect of hologram distance on the performance of various algorithms is discussed in detail. The study also compares the computational time required by each algorithm to complete the comparison. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.

  14. Obtaining sparse distributions in 2D inverse problems.

    PubMed

    Reci, A; Sederman, A J; Gladden, L F

    2017-08-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing system properties in the case of sparse inverse problems, i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems: relaxation-relaxation (T1-T2) and diffusion-relaxation (D-T2) correlation experiments in NMR, which have found widespread applications in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal-to-noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter- and intra-particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed in a bulk liquid phase; neither NNLS nor Tikhonov regularization is able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise. Copyright © 2017. Published by Elsevier Inc.
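
    One standard route to such L1-regularized solutions is iterative soft thresholding (ISTA); the authors' own algorithm may differ, and the kernel K and data g below are placeholders:

        import numpy as np

        def ista(K, g, lam, n_iter=500):
            """Minimize 0.5 * ||K f - g||^2 + lam * ||f||_1 by ISTA."""
            L = np.linalg.norm(K, 2) ** 2      # Lipschitz constant of the gradient
            f = np.zeros(K.shape[1])
            for _ in range(n_iter):
                z = f - (1.0 / L) * (K.T @ (K @ f - g))                # gradient step
                f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return f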

  16. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.

  17. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not use synchrotron radiation confront a common problem arising from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
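
    The Fourier-regularization stage of such a scheme can be sketched as a Tikhonov-regularized inverse filter; the wavelet-shrinkage stage of ForWaRD is omitted here, and the Gaussian PSF is an illustrative stand-in for the finite-source-size blur.

```python
import numpy as np

# Tikhonov-regularized deconvolution in the Fourier domain:
# F_hat = H* G / (|H|^2 + lam). The PSF and lam are illustrative.
def tikhonov_deconv(blurred, psf, lam=1e-2):
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF assumed centered, same shape
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + lam)))

ny, nx = 128, 128
yy, xx = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))
psf /= psf.sum()                              # normalized Gaussian blur kernel
# Usage (blurred_image assumed given): restored = tikhonov_deconv(blurred_image, psf)
```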

  18. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  19. Deep neural network-based bandwidth enhancement of photoacoustic data.

    PubMed

    Gutta, Sreedevi; Kadimesetty, Venkata Suryanarayana; Kalva, Sandeep Kumar; Pramanik, Manojit; Ganapathy, Sriram; Yalavarthy, Phaneendra K

    2017-11-01

    Photoacoustic (PA) signals collected at the boundary of tissue are always band-limited. A deep neural network was proposed to enhance the bandwidth (BW) of the detected PA signal, thereby improving the quantitative accuracy of the reconstructed PA images. A least squares-based deconvolution method that utilizes the Tikhonov regularization framework was used for comparison with the proposed network. The proposed method was evaluated using both numerical and experimental data. The results indicate that the proposed method was capable of enhancing the BW of the detected PA signal, which in turn improves the contrast recovery and quality of reconstructed PA images without adding any significant computational burden. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  20. X-Ray Phase Imaging for Breast Cancer Detection

    DTIC Science & Technology

    2010-09-01

    regularization seeks the minimum-norm, least squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory... of norm, that can effectively reflect the accuracy of the retrieved data as an image, if ‖δIk+1 − δIk‖ is less than a predefined threshold value β... pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the

  1. Electrochemical Impedance Imaging via the Distribution of Diffusion Times

    NASA Astrophysics Data System (ADS)

    Song, Juhyun; Bazant, Martin Z.

    2018-03-01

    We develop a mathematical framework to analyze electrochemical impedance spectra in terms of a distribution of diffusion times (DDT) for a parallel array of random finite-length Warburg (diffusion) or Gerischer (reaction-diffusion) circuit elements. A robust DDT inversion method is presented based on complex nonlinear least squares regression with Tikhonov regularization and illustrated for three cases of nanostructured electrodes for energy conversion: (i) a carbon nanotube supercapacitor, (ii) a silicon nanowire Li-ion battery, and (iii) a porous-carbon vanadium flow battery. The results demonstrate the feasibility of nondestructive "impedance imaging" to infer microstructural statistics of random, heterogeneous materials.

  2. Robust design optimization using the price of robustness, robust least squares and regularization methods

    NASA Astrophysics Data System (ADS)

    Bukhari, Hassan J.

    2017-12-01

    In this paper, a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation of parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. The methodology is compared with a prior method based on multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.

  3. A Flexible and Efficient Method for Solving Ill-Posed Linear Integral Equations of the First Kind for Noisy Data

    NASA Astrophysics Data System (ADS)

    Antokhin, I. I.

    2017-06-01

    We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.
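
    For a discretized Fredholm equation of the first kind, Tikhonov's regularization has a convenient closed form through SVD filter factors, x_lam = sum_i s_i/(s_i^2 + lam) (u_i' g) v_i. The sketch below uses an illustrative smoothing kernel and noise level, not ones from the paper.

```python
import numpy as np

# Tikhonov solution of K f = g via SVD filter factors; kernel is illustrative.
n = 100
s_grid = np.linspace(0, 1, n)
t_grid = np.linspace(0, 1, n)
K = np.exp(-(s_grid[:, None] - t_grid[None, :]) ** 2 / 0.02) / n  # smooth kernel

f_true = np.sin(np.pi * t_grid) ** 2
g = K @ f_true + 1e-4 * np.random.default_rng(1).standard_normal(n)

U, s, Vt = np.linalg.svd(K)
def tikhonov(lam):
    filt = s / (s ** 2 + lam)        # Tikhonov filter factors s_i/(s_i^2+lam)
    return Vt.T @ (filt * (U.T @ g))

f_rec = tikhonov(1e-6)               # lam would be chosen from the data noise
```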

  4. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals, due to bridge vibration and bumps on the bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and weighted l1-norm regularization, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used to match the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem, and the optimal regularization parameter is chosen by the Bayesian information criterion (BIC). In order to assess the accuracy and feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and that it performs better than the Tikhonov regularization method. Some related issues are discussed as well.
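
    A hedged sketch of the two key ingredients, a redundant concatenated dictionary (trigonometric plus rectangular atoms) and a weighted l1-norm problem solved by FISTA, is given below; grid sizes, atom widths, and the weights are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

# Weighted-l1 FISTA over a concatenated dictionary D = [trig | rect]:
# min_a 0.5*||D a - f||^2 + ||w * a||_1, force estimate f_hat = D a.
n = 256
t = np.arange(n) / n
trig = np.column_stack([np.sin(2 * np.pi * k * t) for k in range(1, 21)] +
                       [np.cos(2 * np.pi * k * t) for k in range(1, 21)])
rect = np.column_stack([(np.abs(t - c) < 0.01).astype(float)
                        for c in np.linspace(0, 1, 64)])
D = np.hstack([trig, rect])                    # redundant dictionary

f = np.sin(2 * np.pi * 3 * t) + 2.0 * (np.abs(t - 0.5) < 0.01)  # harmonic + impact
w = np.full(D.shape[1], 0.05)                  # per-atom l1 weights (assumed)

L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant
a = np.zeros(D.shape[1]); z = a.copy(); tk = 1.0
for _ in range(300):                           # FISTA iterations
    v = z - D.T @ (D @ z - f) / L
    a_new = np.sign(v) * np.maximum(np.abs(v) - w / L, 0.0)
    tk_new = (1 + np.sqrt(1 + 4 * tk ** 2)) / 2
    z = a_new + ((tk - 1) / tk_new) * (a_new - a)
    a, tk = a_new, tk_new
force_est = D @ a
```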

  6. Identification of Vehicle Axle Loads from Bridge Dynamic Responses

    NASA Astrophysics Data System (ADS)

    ZHU, X. Q.; LAW, S. S.

    2000-09-01

    A method is presented to identify moving loads on a bridge deck modelled as an orthotropic rectangular plate. The dynamic behavior of the bridge deck under moving loads is analyzed using orthotropic plate theory and the modal superposition principle, and the Tikhonov regularization procedure is applied to provide bounds on the identified forces in the time domain. The identified results using a beam model and a plate model of the bridge deck are compared, and the conditions under which the bridge deck can be simplified as an equivalent beam model are discussed. Computer simulations and laboratory tests show the effectiveness and validity of the proposed method in identifying forces travelling along the central line or on an eccentric path on the bridge deck.

  7. Remote Measurement of the Atmospheric Isoplanatic Angle and Determination of Refractive Turbulence Profiles by Direct Inversion of the Scintillation Amplitude Covariance Function with Tikhonov Regularization.

    DTIC Science & Technology

    1985-12-01

    shows Good’s 2 data between 500 m and 40 km. Good obtained thisCn profile by differential temperature measurement between two balloon-borne microthermal ...Cn profiles. However, they are difficult to obtain by remote measurements. In Chapters IV and V, I presented a profile measured by microthermal probes

  8. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments involving the identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  9. An inverse finance problem for estimation of the volatility

    NASA Astrophysics Data System (ADS)

    Neisy, A.; Salmani, K.

    2013-01-01

    The Black-Scholes model, as a base model for pricing in derivatives markets, has some deficiencies, such as ignoring market jumps and treating market volatility as a constant factor. In this article, we introduce a pricing model for European options under a jump-diffusion underlying asset. We then solve this model, which contains an integral term and derivative terms, using appropriate numerical methods. Finally, treating volatility as an unknown parameter, we estimate it using the proposed model. For the purpose of estimating volatility, we formulate an inverse problem, in which the inverse problem model is first defined and the volatility is then estimated by minimizing a functional with Tikhonov regularization.

  10. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and self-adaptively addresses several noteworthy issues, including the choice of the weighting coefficients, the inversion range, and the optimal inversion method between two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.

  11. A function space framework for structural total variation regularization with applications in inverse problems

    NASA Astrophysics Data System (ADS)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  12. Observation of a high-energy tail in ion energy distribution in the cylindrical Hall thruster plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Youbong; Kim, Holak; Choe, Wonho, E-mail: wchoe@kaist.ac.kr

    2014-10-15

    A novel method is presented to determine populations and ion energy distribution functions (IEDFs) of individual ion species having different charge states in an ion beam from the measured spectrum of an E × B probe. The inversion of the problem is performed by adopting the iterative Tikhonov regularization method with the characteristic matrices obtained from the calculated ion trajectories. In a cylindrical Hall thruster plasma, an excellent agreement is observed between the IEDFs by an E × B probe and those by a retarding potential analyzer. The existence of a high-energy tail in the IEDF is found to be mainly due to singly charged Xe ions, and is interpreted in terms of non-linear ion acceleration.

  13. Determination of the Geometric Form of a Plane of a Tectonic Gap as the Inverse Ill-Posed Problem of Mathematical Physics

    NASA Astrophysics Data System (ADS)

    Sirota, Dmitry; Ivanov, Vadim

    2017-11-01

    Mining operations influence the stability of natural and technogenic massifs and give rise to sources of differences in mechanical tension. These sources generate a quasistationary electric field with a Newtonian potential. The paper reviews a method for determining the shape and size of a flat source of a field with this kind of potential. This common problem arises in many areas of mining: geological exploration of mineral resources, ore deposits, control of underground mining, determination of coal self-heating sources, localization of the sources of rock cracks, and other applied problems of practical physics. These problems are inverse and ill-posed, and are solved by conversion to a Fredholm-Urysohn integral equation of the first kind. This equation is then solved by A. N. Tikhonov's regularization method.

  14. An improved current potential method for fast computation of stellarator coil shapes

    NASA Astrophysics Data System (ADS)

    Landreman, Matt

    2017-04-01

    Several fast methods for computing stellarator coil shapes are compared, including the classical NESCOIL procedure (Merkel 1987 Nucl. Fusion 27 867), its generalization using truncated singular value decomposition, and a Tikhonov regularization approach we call REGCOIL in which the squared current density is included in the objective function. Considering W7-X and NCSX geometries, and for any desired level of regularization, we find the REGCOIL approach simultaneously achieves lower surface-averaged and maximum values of both current density (on the coil winding surface) and normal magnetic field (on the desired plasma surface). This approach therefore can simultaneously improve the free-boundary reconstruction of the target plasma shape while substantially increasing the minimum distances between coils, preventing collisions between coils while improving access for ports and maintenance. The REGCOIL method also allows finer control over the level of regularization, it preserves convexity to ensure the local optimum found is the global optimum, and it eliminates two pathologies of NESCOIL: the resulting coil shapes become independent of the arbitrary choice of angles used to parameterize the coil surface, and the resulting coil shapes converge rather than diverge as Fourier resolution is increased. We therefore contend that REGCOIL should be used instead of NESCOIL for applications in which a fast and robust method for coil calculation is needed, such as when targeting coil complexity in fixed-boundary plasma optimization, or for scoping new stellarator geometries.

  15. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy.

    PubMed

    Verveer, P. J; Gemkow, M. J; Jovin, T. M

    1999-01-01

    We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified within a Bayesian theory according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness and no regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.

  16. 4D-tomographic reconstruction of water vapor using the hybrid regularization technique with application to the North West of Iran

    NASA Astrophysics Data System (ADS)

    Adavi, Zohre; Mashhadi-Hossainali, Masoud

    2015-04-01

    Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, which is due to atmospheric phenomena above the surface of the earth, depends on both space and time. Because of the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. By analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals, inversion techniques are used to model the water vapor in this approach. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used for computing a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques and benefits from the advantages of both iterative and direct techniques. Moreover, it is independent of initial values. Based on this property, and using an appropriate resolution for the model, the number of model elements that are not constrained by GPS measurements is first minimized; then, water vapor density is estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.

  17. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    PubMed

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values into the solution, and this negativity produces artifacts in the reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem, and the solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
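
    As a much-simplified stand-in for the solver described above, a non-negatively constrained least squares reconstruction can be sketched with projected gradient descent; A and b are the (assumed given) sensitivity matrix and measurement vector, and this is not the paper's GPRN method.

```python
import numpy as np

# Projected gradient descent for min_x ||A x - b||^2 subject to x >= 0.
def nn_reconstruct(A, b, iters=500):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(iters):
        x = np.maximum(x - A.T @ (A @ x - b) / L, 0.0)  # step, then project
    return x
```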

  18. Tomographic Neutron Imaging using SIRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregor, Jens; FINNEY, Charles E A; Toops, Todd J

    2013-01-01

    Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while material such as metal is nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone beam x-ray CT and other inverse problems.
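
    One simple way to combine a SIRT-type iteration with Tikhonov regularization, sketched below under assumed weightings, is to run the iteration on the augmented system [A; sqrt(mu) I] x = [b; 0]; this is a generic textbook variant, not necessarily the PSIRT used by the authors.

```python
import numpy as np

# SIRT/SART-type iteration on the Tikhonov-augmented system.
def sirt_tikhonov(A, b, mu=0.1, omega=1.0, iters=200):
    Aaug = np.vstack([A, np.sqrt(mu) * np.eye(A.shape[1])])
    baug = np.concatenate([b, np.zeros(A.shape[1])])
    R = 1.0 / np.maximum(np.abs(Aaug).sum(axis=1), 1e-12)  # inverse row sums
    C = 1.0 / np.maximum(np.abs(Aaug).sum(axis=0), 1e-12)  # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * C * (Aaug.T @ (R * (baug - Aaug @ x)))
    return x
```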

  19. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    NASA Astrophysics Data System (ADS)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models and are extensively applied in the geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently-used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. Indeed, this technique shows an improvement of 50-80% over TR.

  20. Evaluation of global equal-area mass grid solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron

    2015-04-01

    The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.

  1. Accelerated gradient methods for the x-ray imaging of solar flares

    NASA Astrophysics Data System (ADS)

    Bonettini, S.; Prato, M.

    2014-05-01

    In this paper we present new optimization strategies for the reconstruction of x-ray images of solar flares by means of the data collected by the Reuven Ramaty High Energy Solar Spectroscopic Imager. The imaging concept of the satellite is based on rotating modulation collimator instruments, which allow the use of both Fourier imaging approaches and reconstruction techniques based on the straightforward inversion of the modulated count profiles. Although greater attention has been devoted in the last decade to the former strategies due to their very limited computational cost, here we consider the latter model and investigate the effectiveness of different accelerated gradient methods for the solution of the corresponding constrained minimization problem. Moreover, regularization is introduced either through an early stopping of the iterative procedure, or through a Tikhonov term added to the discrepancy function, by means of a discrepancy principle accounting for the Poisson nature of the noise affecting the data.

  2. Model study of imaging myocardial infarction by intracardiac electrical impedance tomography.

    PubMed

    Li, Ying; Rao, Liyun; Ling, Yuesheng; He, Renjie; Khoury, Dirar S

    2008-01-01

    Electrical impedance tomography (EIT) detects tissue composition inside a medium by determining its resistive properties, and uses various electrode configurations to pass a small electric current and measure the corresponding potential. We investigated the feasibility of reconstructing scarred tissue inside the heart wall by employing EIT on the basis of a catheter carrying a plurality of electrodes and placed inside the blood-filled heart cavity. We built a computer model of the biological medium and reconstructed the resistivity distribution using the finite element method and Tikhonov regularization. The results established the successful implementation of the numerical methods and the possibility of localizing and quantifying scarred myocardium. Novel application of EIT from inside the heart cavity could be useful during catheterization and may complement other diagnostic modalities. Further research is necessary to assess the impact of several factors on the accuracy of the reconstruction, including the number of electrodes, catheter location, and scar size.

  3. Optimal design of loudspeaker arrays for robust cross-talk cancellation using the Taguchi method and the genetic algorithm.

    PubMed

    Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung

    2005-05-01

    An optimal design technique of loudspeaker arrays for cross-talk cancellation with application in three-dimensional audio is presented. An array focusing scheme is presented on the basis of inverse propagation that relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and an optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.

  4. Chromotomography for a rotating-prism instrument using backprojection, then filtering.

    PubMed

    Deming, Ross W

    2006-08-01

    A simple closed-form solution is derived for reconstructing a 3D spatial-chromatic image cube from a set of chromatically dispersed 2D image frames. The algorithm is tailored for a particular instrument in which the dispersion element is a matching set of mechanically rotated direct vision prisms positioned between a lens and a focal plane array. By using a linear operator formalism to derive the Tikhonov-regularized pseudoinverse operator, it is found that the unique minimum-norm solution is obtained by applying the adjoint operator, followed by 1D filtering with respect to the chromatic variable. Thus the filtering and backprojection (adjoint) steps are applied in reverse order relative to an existing method. Computational efficiency is provided by use of the fast Fourier transform in the filtering step.

  5. Food adulteration analysis without laboratory prepared or determined reference food adulterant values.

    PubMed

    Kalivas, John H; Georgiou, Constantinos A; Moira, Marianna; Tsafaras, Ilias; Petrakis, Eleftherios A; Mousdis, George A

    2014-04-01

    Quantitative analysis of food adulterants is an important health and economic issue that needs to be fast and simple. Spectroscopy has significantly reduced analysis time. However, preparation of analyte calibration samples matrix-matched to the prediction samples is still needed, which can be laborious and costly. Reported in this paper is the application of a newly developed pure component Tikhonov regularization (PCTR) process that does not require laboratory-prepared or reference analysis methods and, hence, is a greener calibration method. The PCTR method requires an analyte pure component spectrum and non-analyte spectra. As a food analysis example, synchronous fluorescence spectra of extra virgin olive oil samples adulterated with sunflower oil are used. Results are shown to be better than those obtained using ridge regression with reference calibration samples. The flexibility of PCTR allows including reference samples and is generic for use with other instrumental methods and food products. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.

    2015-06-01

    The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and perform a Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This is the key to introducing and proving the formula for calculating the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples.
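
    The closed-form recovery described above can be sketched as ridge (Tikhonov) regression of PCA coefficients on the few acquired samples; the training model, time grid, and sample positions below are illustrative assumptions, not the J-PET signal model.

```python
import numpy as np

# Recover a full waveform from a few samples using a PCA basis learned
# from training signals plus a Tikhonov (ridge) penalty.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)                    # time grid (ns), assumed
train = np.array([np.exp(-(t - 4 - 0.3 * rng.standard_normal()) ** 2)
                  * (1 + 0.1 * rng.standard_normal()) for _ in range(500)])

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
W = Vt[:5].T                                   # top-5 principal components

idx = np.array([60, 75, 90, 110])              # the few acquired samples
signal = np.exp(-(t - 4.1) ** 2)
y = signal[idx]

lam = 1e-3                                     # ridge weight (assumed)
A = W[idx]
c = np.linalg.solve(A.T @ A + lam * np.eye(W.shape[1]), A.T @ (y - mean[idx]))
recovered = mean + W @ c                       # closed form, no iterations
```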

  7. Dynamic Reconstruction Algorithm of Three-Dimensional Temperature Field Measurement by Acoustic Tomography

    PubMed Central

    Li, Yanqiu; Liu, Shi; Schlaberg, H. Inaki

    2017-01-01

    Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed that considers both the acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses the measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm with a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better than those of static algorithms such as the least squares method, the algebraic reconstruction technique, and standard Tikhonov regularization. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930

  8. Solving the Rational Polynomial Coefficients Based on L Curve

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Li, X.; Yue, T.; Huang, W.; He, C.; Huang, Y.

    2018-05-01

    The rational polynomial coefficients (RPC) model is a generalized sensor model that can achieve high approximation accuracy, and it is widely used in the field of photogrammetry and remote sensing. The least squares method is usually used to determine the optimal parameter solution of the rational function model. However, when the distribution of control points is not uniform or the model is over-parameterized, the coefficient matrix of the normal equation becomes singular and the normal equation becomes ill-conditioned, so the obtained solutions are extremely unstable and may even be wrong. Tikhonov regularization can effectively improve and solve the ill-conditioned equation. In this paper, we solve the pathological equations by the regularization method and determine the regularization parameter by the L-curve. The results of experiments on aerial frame photos show that the first-order RPC with equal denominators has the highest accuracy. A high-order RPC model is not necessary when processing frame images, as the RPC model and the projective model are almost the same. The results show that the first-order RPC model is basically consistent with the rigorous sensor model of photogrammetry. Orthorectification results of both the first-order RPC model and the camera model (ERDAS 9.2 platform) are similar to each other, with maximum residuals in X and Y of 0.8174 feet and 0.9272 feet, respectively. This shows that the RPC model can be used as a replacement sensor model in aerial photogrammetric processing.
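
    A sketch of L-curve selection of the Tikhonov parameter follows: scan lambda, record the curve (log residual norm, log solution norm), and take the point of maximum curvature as the corner. The matrix A stands for the (possibly ill-conditioned) design matrix of the normal equations, and the curvature estimate is a simple finite-difference one, not the paper's exact procedure.

```python
import numpy as np

# Pick the Tikhonov parameter at the L-curve corner (maximum curvature).
def lcurve_corner(A, b, lams):
    rho, eta = [], []
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))   # log residual norm
        eta.append(np.log(np.linalg.norm(x)))           # log solution norm
    rho, eta = np.array(rho), np.array(eta)
    d1r, d1e = np.gradient(rho), np.gradient(eta)       # finite differences
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
    return lams[np.argmax(np.abs(kappa))]

# Usage (A, b assumed given): lam = lcurve_corner(A, b, np.logspace(-12, 2, 60))
```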

  9. Källén-Lehmann spectroscopy for (un)physical degrees of freedom

    NASA Astrophysics Data System (ADS)

    Dudal, David; Oliveira, Orlando; Silva, Paulo J.

    2014-01-01

    We consider the problem of "measuring" the Källén-Lehmann spectral density of a particle (be it elementary or bound state) propagator by means of 4D lattice data. As the latter are obtained from operations at (Euclidean momentum squared) p2≥0, we are facing the generically ill-posed problem of converting a limited data set over the positive real axis to an integral representation, extending over the whole complex p2 plane. We employ a linear regularization strategy, commonly known as the Tikhonov method with the Morozov discrepancy principle, with suitable adaptations to realistic data, e.g. with an unknown threshold. An important virtue over the (standard) maximum entropy method is the possibility to also probe unphysical spectral densities, for example, of a confined gluon. We apply our proposal here to "physical" mock spectral data as a litmus test and then to the lattice SU(3) Landau gauge gluon at zero temperature.
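
    The Morozov discrepancy principle mentioned above picks the Tikhonov parameter so that the residual matches the noise level delta; since the residual grows monotonically with lambda, a bisection in log(lambda) suffices. The sketch assumes delta lies between the residuals at the bracketing endpoints.

```python
import numpy as np

# Tikhonov with Morozov's discrepancy principle: find lam such that
# ||K x_lam - y|| ~= delta, by bisection on log(lam).
def morozov_lambda(K, y, delta, lo=1e-12, hi=1e2, iters=60):
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    beta = U.T @ y
    def residual(lam):
        x = Vt.T @ (s * beta / (s ** 2 + lam))   # Tikhonov solution via SVD
        return np.linalg.norm(K @ x - y)
    for _ in range(iters):                        # residual increases with lam
        mid = np.sqrt(lo * hi)
        if residual(mid) < delta:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```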

  10. Image-guided filtering for improving photoacoustic tomographic image reconstruction.

    PubMed

    Awasthi, Navchetan; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2018-06-01

    Several algorithms exist to solve the photoacoustic image reconstruction problem, depending on the expected features of the reconstructed image. These reconstruction algorithms typically promote one feature in the output image, such as smoothness or sharpness. Combining these features using a guided filtering approach, which requires an input image and a guiding image, was attempted in this work. This approach acts as a postprocessing step to improve the commonly used Tikhonov or total variation regularization methods. The result obtained from linear backprojection was used as the guiding image. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve the signal-to-noise ratio of the reconstructed images by as much as 11.23 dB, with the added advantage of being computationally efficient. This approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and was shown to outperform them. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seo, Dongcheol; Peterson, B. J.; Lee, Seung Hun

    The resistive bolometers have been successfully installed in the midplane of L-port in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The spatial and temporal resolutions, 4.5 cm and ~1 kHz, respectively, enable us to measure the radial profile of the total power radiated from the magnetically confined plasma at high temperature through radiation and neutral particles. The radiated power was measured for all shots; even at low plasma current, the bolometer signal was detectable. Electron cyclotron resonance heating (ECH) has been used in the tokamak for ECH-assisted start-up and plasma control by local heating and current drive. The detectors of the resistive bolometer near the ECH antenna are affected by the electron cyclotron wave. A tomographic reconstruction, using the Phillips-Tikhonov regularization method, will be carried out for the major radial profile of the radiation emissivity of the circular cross-section plasma.

  12. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CSMRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CSMRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.

  13. Dipolar filtered magic-sandwich-echoes as a tool for probing molecular motions using time domain NMR

    NASA Astrophysics Data System (ADS)

    Filgueiras, Jefferson G.; da Silva, Uilson B.; Paro, Giovanni; d'Eurydice, Marcel N.; Cobo, Márcio F.; deAzevedo, Eduardo R.

    2017-12-01

    We present a simple 1H NMR approach for characterizing intermediate- to fast-regime molecular motions using 1H time-domain NMR at low magnetic field. The method is based on a Goldman-Shen dipolar filter (DF) followed by a mixed magic sandwich echo (MSE). The dipolar filter suppresses the signals arising from molecular segments presenting sub-kHz mobility, so only signals from mobile segments are detected. Thus, the temperature dependence of the signal intensities directly evidences the onset of molecular motions with rates higher than kHz. The DF-MSE signal intensity is described by an analytical function based on the Anderson-Weiss theory, from which parameters related to the molecular motion (e.g. correlation times and activation energy) can be estimated when performing experiments as a function of temperature. Furthermore, we propose the use of Tikhonov regularization for estimating the width of the distribution of correlation times.

  14. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT) because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results. Then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and the noise suppression of L1 regularization.
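
    The alternating structure described above can be sketched as three steps per iteration: a Tikhonov (ridge) gradient update, low-pass filtering of the intermediate image in the DCT domain, and an L1 soft threshold. The operator A (mapping the flattened image, with A.shape[1] == prod(shape), to the measurements b), the thresholds, and the fraction of kept DCT coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def reconstruct(A, b, shape, lam=1e-2, tau=1e-3, keep=0.25, iters=50):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2 + lam           # gradient Lipschitz constant
    for _ in range(iters):
        # Tikhonov step on ||A x - b||^2/2 + lam*||x||^2/2
        x = x - (A.T @ (A @ x - b) + lam * x) / L
        # DCT filtering: keep only the low-frequency corner of coefficients
        c = dctn(x.reshape(shape), norm='ortho')
        mask = np.zeros(shape)
        mask[tuple(slice(0, max(1, int(keep * s))) for s in shape)] = 1.0
        x = idctn(c * mask, norm='ortho').ravel()
        # sparsity constraint via L1 soft threshold
        x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
    return x.reshape(shape)
```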

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafer, Morgan W; Battaglia, D. J.; Unterberg, Ezekial A

    A new tangential 2D Soft X-Ray Imaging System (SXRIS) is being designed to examine the edge magnetic island structure in the lower X-point region of DIII-D. A synthetic diagnostic calculation coupled to 3D emissivity estimates is used to generate phantom images. Phillips-Tikhonov regularization is used to invert the phantom images for comparison to the original emissivity model. Noise level, island size, and equilibrium accuracy are scanned to assess the feasibility of detecting edge island structures. Models of typical DIII-D discharges indicate integration times > 1 ms with accurate equilibrium reconstruction are needed for small island (< 3 cm) detection.

  16. Forecasting Epidemics Through Nonparametric Estimation of Time-Dependent Transmission Rates Using the SEIR Model.

    PubMed

    Smirnova, Alexandra; deCamp, Linda; Chowell, Gerardo

    2017-05-02

    Deterministic and stochastic methods relying on early case incidence data for forecasting epidemic outbreaks have received increasing attention during the last few years. In mathematical terms, epidemic forecasting is an ill-posed problem due to instability of parameter identification and limited available data. While previous studies have largely estimated the time-dependent transmission rate by assuming specific functional forms (e.g., exponential decay) that depend on a few parameters, here we introduce a novel approach for the reconstruction of nonparametric time-dependent transmission rates by projecting onto a finite subspace spanned by Legendre polynomials. This approach enables us to effectively forecast future incidence cases, a clear advantage over recovering the transmission rate at finitely many grid points within the interval where the data are currently available. In our approach, we compare three regularization algorithms: variational (Tikhonov) regularization, truncated singular value decomposition (TSVD), and modified TSVD, in order to determine the stabilizing strategy that is most effective in terms of reliability of forecasting from limited data. We illustrate our methodology using simulated data as well as case incidence data for various epidemics, including the 1918 influenza pandemic in San Francisco and the 2014-2015 Ebola epidemic in West Africa.
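
    The Legendre projection at the heart of this approach can be sketched as a regularized least squares fit of the rate's basis coefficients; fitting beta(t) directly to noisy samples below stands in for the full SEIR inverse problem, and the decline model, polynomial degree, and penalty weight are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

# Recover a time-dependent rate on a small Legendre basis with a
# Tikhonov penalty on the coefficients.
T = 90.0
t = np.linspace(0, T, 60)
beta_true = 0.6 * np.exp(-t / 40) + 0.15        # illustrative declining rate
obs = beta_true + 0.03 * np.random.default_rng(3).standard_normal(t.size)

u = 2 * t / T - 1                               # map [0, T] onto [-1, 1]
V = legendre.legvander(u, 6)                    # Legendre design matrix, degree 6
lam = 1e-2
coef = np.linalg.solve(V.T @ V + lam * np.eye(V.shape[1]), V.T @ obs)
beta_hat = V @ coef                             # smooth nonparametric estimate
```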

  17. Person Authentication Using Learned Parameters of Lifting Wavelet Filters

    NASA Astrophysics Data System (ADS)

    Niijima, Koichi

    2006-10-01

    This paper proposes a method for identifying persons by the use of lifting wavelet parameters learned by kurtosis minimization. Our learning method uses desirable properties of the kurtosis and wavelet coefficients of a facial image. Exploiting these properties, the lifting parameters are trained so as to minimize the kurtosis of the lifting wavelet coefficients computed for the facial image. Since this minimization problem is ill-posed, it is solved with the aid of Tikhonov's regularization method. Our learning algorithm is applied to each of the faces to be identified to generate a feature vector whose components consist of the learned parameters. The constructed feature vectors are stored together with the corresponding faces in a feature vector database. Person authentication is performed by comparing the feature vector of a query face with those stored in the database. In numerical experiments, the lifting parameters are trained for each of the neutral faces of 132 persons (74 males and 58 females) in the AR face database. Person authentication is executed using the smile and anger faces of the same persons in the database.

  18. Obtaining the Bidirectional Transfer Distribution Function ofIsotropically Scattering Materials Using an Integrating Sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonsson, Jacob C.; Branden, Henrik

    2006-10-19

    This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge about the illuminated area of the sample and the geometry of the sphere port, in combination with the measured data, leads to a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by using Tikhonov regularization on the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterised using both two goniophotometers and the described method to verify the validity of the new method. The agreement is very good for the more diffuse samples. The solution for the low-scattering samples contains unphysical oscillations, but still gives the correct shape. The origin of the oscillations and why they are more prominent in low-scattering samples are discussed.

  19. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  20. Solution of Tikhonov's Motion-Separation Problem Using the Modified Newton-Kantorovich Theorem

    NASA Astrophysics Data System (ADS)

    Belolipetskii, A. A.; Ter-Krikorov, A. M.

    2018-02-01

    The paper presents a new way to prove the existence of a solution of the well-known Tikhonov's problem on systems of ordinary differential equations in which one part of the variables performs "fast" motions and the other part, "slow" motions. Tikhonov's problem has been the subject of a large number of works in connection with its applications to a wide range of mathematical models in natural science and economics. Only a short list of publications, which present the proof of the existence of solutions in this problem, is cited. The aim of the paper is to demonstrate the possibility of applying the modified Newton-Kantorovich theorem to prove the existence of a solution in Tikhonov's problem. The technique proposed can be used to prove the existence of solutions of other classes of problems with a small parameter.

  1. Nonparametric instrumental regression with non-convex constraints

    NASA Astrophysics Data System (ADS)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.

  2. Stochastic calibration and learning in nonstationary hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Howitt, R.

    2014-05-01

    Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and the optimal use of water. Once calibrated, these models are used for water management and analysis on the assumption that they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity is blended during the calibration process, avoiding loss of information and overcalibration to the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.

  3. Disentangling rotational velocity distribution of stars

    NASA Astrophysics Data System (ADS)

    Curé, Michel; Rial, Diego F.; Cassetti, Julia; Christen, Alejandra

    2017-11-01

    Rotational speed is an important physical parameter of stars: knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. However, rotational speed cannot be measured directly; only the projected velocity v sin(i), the product of the rotational speed and the sine of the inclination angle, is observable, and the observed distribution is a convolution of the two. The problem can be described by a Fredholm integral of the first kind. A method (Curé et al. 2014) to deconvolve this inverse problem and obtain the cumulative distribution function for stellar rotational velocities is based on the work of Chandrasekhar & Münch (1950). Another method, which yields the probability distribution function, is the Tikhonov regularization method (Christen et al. 2016). The proposed methods can also be applied to the mass ratio distribution of extrasolar planets and brown dwarfs in binary systems (Curé et al. 2015). For stars in a cluster, where all members are gravitationally bound, the standard assumption that rotational axes are uniformly distributed over the sphere is questionable. On the basis of the proposed techniques, a simple approach to model this anisotropy of rotational axes has been developed, with the possibility of "disentangling" simultaneously both the rotational speed distribution and the orientation of rotational axes.
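
    A schematic of the Tikhonov route to the velocity pdf (not the authors' code): discretize the Fredholm kernel that maps the true-speed distribution to the observed v sin(i) distribution, then solve the smoothness-regularized normal equations. The isotropic projection kernel is standard; the grids, noise level and regularization parameter below are illustrative assumptions.

      import numpy as np

      def projection_kernel(y, v):
          # p(y | v) for isotropically oriented rotation axes:
          # p = y / (v * sqrt(v^2 - y^2)) for y < v, and 0 otherwise.
          with np.errstate(divide="ignore", invalid="ignore"):
              p = y / (v * np.sqrt(v**2 - y**2))
          return np.where(y < v, p, 0.0)

      v = np.linspace(0.0, 400.0, 201)[1:]   # true-speed grid (km/s), v > 0
      dv = v[1] - v[0]
      y = v - 0.5 * dv                       # staggered v*sin(i) grid avoids y == v
      K = projection_kernel(y[:, None], v[None, :]) * dv   # discretized operator

      f_true = np.exp(-0.5 * ((v - 150.0) / 40.0) ** 2)    # toy speed pdf
      f_true /= f_true.sum() * dv
      rng = np.random.default_rng(0)
      f_obs = K @ f_true + 1e-4 * rng.standard_normal(len(y))  # "measured" pdf

      L = np.diff(np.eye(len(v)), n=2, axis=0)  # second-difference smoothing operator
      lam = 1e-3                                # would be tuned in practice
      f_hat = np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ f_obs)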

  4. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and thereby the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced-space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361

  5. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach, in the context of low-rank separated representation, to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  6. Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.

    PubMed

    Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce

    2018-06-15

    A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least norm (LN), using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of the full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
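
    The L-curve criterion mentioned above can be sketched generically (this is not the LN-QSM code; the test matrix and parameter grid are illustrative assumptions): sweep the regularization parameter, plot log residual norm against log solution norm, and take the corner of maximum curvature.

      import numpy as np

      def l_curve_corner(A, b, lambdas):
          # For each lambda, record the log residual and log solution norms of
          # the Tikhonov solution; return the lambda at maximum curvature.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          beta = U.T @ b
          rho, eta = [], []
          for lam in lambdas:
              f = s**2 / (s**2 + lam**2)        # Tikhonov filter factors
              x = Vt.T @ (f * beta / s)
              rho.append(np.log(np.linalg.norm(A @ x - b)))
              eta.append(np.log(np.linalg.norm(x)))
          rho, eta = np.array(rho), np.array(eta)
          drho, deta = np.gradient(rho), np.gradient(eta)
          kappa = (drho * np.gradient(deta) - np.gradient(drho) * deta) \
                  / (drho**2 + deta**2) ** 1.5  # curvature via finite differences
          return lambdas[int(np.argmax(kappa))]

      rng = np.random.default_rng(0)
      A = rng.standard_normal((80, 60)) * (1.0 / np.arange(1, 61) ** 2)  # ill-conditioned
      b = A @ rng.standard_normal(60) + 1e-3 * rng.standard_normal(80)
      print(l_curve_corner(A, b, np.logspace(-8, 1, 200)))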

  7. Post-processing through linear regression

    NASA Astrophysics Data System (ADS)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times, the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.

  8. Cerenkov luminescence tomography based on preconditioning orthogonal matching pursuit

    NASA Astrophysics Data System (ADS)

    Liu, Haixiao; Hu, Zhenhua; Wang, Kun; Tian, Jie; Yang, Xin

    2015-03-01

    Cerenkov luminescence imaging (CLI) is a novel optical imaging method that has been shown to be a potential substitute for traditional radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). This imaging method inherits the high sensitivity of nuclear medicine and the low cost of optical molecular imaging. To obtain the depth information of the radioactive isotope, Cerenkov luminescence tomography (CLT) is established and the 3D distribution of the isotope is reconstructed. However, because of strong absorption and scattering, the reconstruction of the CLT sources is always converted to an ill-posed linear system that is hard to solve. In this work, the sparse nature of the light source was taken into account and a preconditioning orthogonal matching pursuit (POMP) method was established to effectively reduce the ill-posedness and obtain better reconstruction accuracy. To demonstrate the accuracy and speed of this algorithm, a heterogeneous numerical phantom experiment and an in vivo mouse experiment were conducted. Both the simulation results and the mouse experiment showed that our reconstruction method provides more accurate reconstructions than the traditional Tikhonov regularization method and the ordinary orthogonal matching pursuit (OMP) method. Our reconstruction method will provide technical support for biological applications of Cerenkov luminescence.
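
    For reference, plain orthogonal matching pursuit (the baseline the paper improves on; the preconditioning step of POMP is not shown) can be sketched in a few lines of Python. The sizes and the test signal below are illustrative assumptions.

      import numpy as np

      def omp(A, y, n_nonzero, tol=1e-8):
          # Greedily pick the column most correlated with the residual, then
          # re-fit all selected coefficients by least squares.
          residual, support = y.copy(), []
          x = np.zeros(A.shape[1])
          for _ in range(n_nonzero):
              j = int(np.argmax(np.abs(A.T @ residual)))
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
              if np.linalg.norm(residual) < tol:
                  break
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 256))
      A /= np.linalg.norm(A, axis=0)            # unit-norm columns
      x_true = np.zeros(256)
      x_true[[10, 70, 200]] = [1.0, -0.5, 2.0]  # sparse "source" vector
      print(np.nonzero(omp(A, A @ x_true, n_nonzero=3))[0])  # -> [ 10  70 200]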

  9. Accurate reconstruction of the thermal conductivity depth profile in case hardened steel

    NASA Astrophysics Data System (ADS)

    Celorrio, Ricardo; Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Mandelis, Andreas

    2010-04-01

    The problem of retrieving a nonhomogeneous thermal conductivity profile from photothermal radiometry data is addressed from the perspective of a stabilized least squares fitting algorithm. We have implemented an inversion method with several improvements: (a) a renormalization of the experimental data which removes not only the instrumental factor but also the constants affecting the amplitude and the phase, (b) the introduction of a frequency weighting factor to balance the contribution of high and low frequencies in the inversion algorithm, (c) the simultaneous fitting of amplitude and phase data, balanced according to their experimental noise, (d) the introduction of a modified Tikhonov regularization procedure to stabilize the inversion, and (e) the use of the Morozov discrepancy principle to stop the iterative process automatically, according to the experimental noise, and so avoid "overfitting" of the experimental data. We have tested this improved method by fitting theoretical data generated from a known conductivity profile. Finally, we have applied our method to real data obtained from a hardened stainless steel plate. The reconstructed in-depth thermal conductivity profile exhibits low dispersion, even at the deepest locations, and is in good anticorrelation with the hardness indentation test.
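
    The Morozov discrepancy principle in item (e) is used there to stop an iteration; in its more common linear-Tikhonov form it can be sketched as follows (a generic illustration with assumed test data, not the authors' photothermal code): choose the regularization weight so that the residual matches the estimated noise level.

      import numpy as np

      def discrepancy_lambda(A, b, delta, lam_lo=1e-10, lam_hi=1e4, iters=60):
          # The Tikhonov residual norm grows monotonically with lambda, so
          # bisect in log space for the lambda whose residual equals delta.
          def residual(lam):
              x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
              return np.linalg.norm(A @ x - b)
          for _ in range(iters):
              lam = np.sqrt(lam_lo * lam_hi)
              if residual(lam) < delta:
                  lam_lo = lam   # under-regularized: still fitting the noise
              else:
                  lam_hi = lam
          return np.sqrt(lam_lo * lam_hi)

      rng = np.random.default_rng(0)
      A = rng.standard_normal((100, 40)) * (1.0 / np.arange(1, 41) ** 1.5)
      noise = 1e-2 * rng.standard_normal(100)
      b = A @ rng.standard_normal(40) + noise
      print(discrepancy_lambda(A, b, delta=np.linalg.norm(noise)))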

  10. Flow curve analysis of a Pickering emulsion-polymerized PEDOT:PSS/PS-based electrorheological fluid

    NASA Astrophysics Data System (ADS)

    Kim, So Hee; Choi, Hyoung Jin; Leong, Yee-Kwong

    2017-11-01

    The steady shear electrorheological (ER) response of poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate)/polystyrene (PEDOT:PSS/PS) composite particles, initially fabricated by Pickering emulsion polymerization, was tested with a 10 vol% ER fluid dispersed in silicone oil. The model-independent shear rate and yield stress obtained from the raw torque-rotational speed data, measured using a Couette-type rotational rheometer under an applied electric field, were then analyzed by Tikhonov regularization, a technique well suited to solving this ill-posed inverse problem. The shear stress-shear rate data were also well fitted by the Bingham fluid model.

  11. A miniaturized near infrared spectrometer for non-invasive sensing of bio-markers as a wearable healthcare solution

    NASA Astrophysics Data System (ADS)

    Bae, Jungmok; Druzhin, Vladislav V.; Anikanov, Alexey G.; Afanasyev, Sergey V.; Shchekin, Alexey; Medvedev, Anton S.; Morozov, Alexander V.; Kim, Dongho; Kim, Sang Kyu; Moon, Hyunseok; Jang, Hyeongseok; Shim, Jaewook; Park, Jongae

    2017-02-01

    A novel miniaturized near-infrared spectrometer readily mountable on wearable devices for continuous monitoring of an individual's key bio-markers is proposed. The spectrum is measured by sequential illumination with LEDs having independent spectral profiles, combined with continuous detection of the light radiated from the skin tissue with a single-cell photodiode (PD). Based on Tikhonov regularization with singular value decomposition, a spectral resolution of less than 10 nm was reconstructed from experimentally measured LED profiles. A prototype covering the first overtone band (1500-1800 nm), where bio-markers have pronounced absorption peaks, was fabricated and its performance verified. The reconstructed spectrum shows that the novel concept of a miniaturized spectrometer is valid.
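
    The reconstruction step can be pictured as follows (a minimal sketch with invented sizes and a toy spectrum, not the device's firmware): with the LED spectral profiles stacked as rows of a matrix S, the skin spectrum r is recovered from the detector readings m = S r by a Tikhonov-regularized inversion written through the SVD of S.

      import numpy as np

      rng = np.random.default_rng(0)
      n_leds, n_wl = 12, 120                    # illustrative sizes
      S = rng.random((n_leds, n_wl))            # measured LED spectral profiles
      r_true = np.exp(-0.5 * ((np.arange(n_wl) - 60) / 8.0) ** 2)  # toy spectrum
      m = S @ r_true + 1e-3 * rng.standard_normal(n_leds)          # detector readings

      # Tikhonov solution via the SVD of S: singular components with s_i << lam
      # are damped, stabilizing this strongly under-determined inversion.
      U, s, Vt = np.linalg.svd(S, full_matrices=False)
      lam = 1e-2
      r_hat = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ m))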

  12. Acoustic and elastic waveform inversion best practices

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We instead identify best practices using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are each effective in different contexts. Beyond these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care: implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions, because the rotation-angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing the experimental results are given.

  13. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    PubMed

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm.
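
    What makes the second technique fast is that the Tikhonov pseudoinverse depends only on the dictionary and the regularization weight, so it can be precomputed once and applied to every voxel with a single matrix multiply. A minimal sketch with stand-in data (the sizes and random matrices are assumptions, not the paper's dictionary):

      import numpy as np

      rng = np.random.default_rng(1)
      n_q, n_atoms, n_voxels = 64, 256, 10000
      D = rng.standard_normal((n_q, n_atoms))   # dictionary (columns = training pdfs)
      Y = rng.standard_normal((n_q, n_voxels))  # undersampled q-space signals

      lam = 0.1
      # Precompute the regularized pseudoinverse (n_atoms x n_q), then apply it
      # to all voxels at once -- no per-voxel iterative solve is needed.
      P = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T)
      coeffs = P @ Y
      recon = D @ coeffs                        # reconstructed pdfs, all voxels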

  14. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm. PMID:23846466

  15. MREIT experiments with 200 µA injected currents: a feasibility study using two reconstruction algorithms, SMM and harmonic B(Z).

    PubMed

    Arpinar, V E; Hamamura, M J; Degirmenci, E; Muftuler, L T

    2012-07-07

    Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique, electrical currents are applied to an object, the resulting magnetic flux density is measured using magnetic resonance imaging (MRI), and the conductivity distribution is reconstructed from these MRI data. Currently, the technique is used in research environments, primarily studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 µA for low frequencies. However, published MREIT studies have utilized currents 10-400 times larger than this limit, bringing into question whether clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 µA total injected current, and tested the performance of two MREIT reconstruction algorithms: the iterative sensitivity matrix method (SMM) of Ider and Birgul (1998 Elektrik 6 215-25) with Tikhonov regularization, and the harmonic B(Z) method proposed by Oh et al (2003 Magn. Reson. Med. 50 875-8). The reconstruction techniques were tested at both 200 µA and 5 mA injected currents to investigate their noise sensitivity under low- and high-current conditions. It should be noted that a 200 µA total injected current into a cylindrical phantom generates only 14.7 µA of current in the imaging slice; similarly, a 5 mA total injected current results in 367 µA in the imaging slice. Total acquisition time for the 200 µA and 5 mA experiments was about 1 h and 8.5 min, respectively. The results demonstrate that conductivity imaging is possible at low currents using the suggested imaging parameters and reconstructing the images with iterative SMM with Tikhonov regularization, which appears to be more tolerant of noisy data than harmonic B(Z).

  16. Updating a synchronous fluorescence spectroscopic virgin olive oil adulteration calibration to a new geographical region.

    PubMed

    Kunz, Matthew Ross; Ottaway, Joshua; Kalivas, John H; Georgiou, Constantinos A; Mousdis, George A

    2011-02-23

    Detecting and quantifying extra virgin olive oil adulteration is of great importance to the olive oil industry. Many spectroscopic methods in conjunction with multivariate analysis have been used to address these issues. However, successes to date are limited, as calibration models are built for a specific set of geographical regions, growing seasons, cultivars, and oil extraction methods (the composite primary condition). Samples from new geographical regions, growing seasons, etc. (secondary conditions) are not always correctly predicted by the primary model, because olive oil and/or adulterant compositions stemming from the secondary conditions do not match the primary conditions. Three Tikhonov regularization (TR) variants are used in this paper to allow adulterant (sunflower oil) concentration predictions in samples from geographical regions that were not part of the original primary calibration domain. Of the three TR variants, ridge regression with an additional 2-norm penalty provides the smallest validation sample prediction errors. Although the paper reports on using TR for model updating to predict adulterant oil concentration, the methods should also be applicable to updating models that distinguish adulterated samples from pure extra virgin olive oil. Additionally, the approaches are general and can be used with other spectroscopic methods and adulterants, as well as with other agricultural products.

  17. Variable Grid Traveltime Tomography for Near-surface Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Cai, A.; Zhang, J.

    2017-12-01

    We present a new traveltime tomography algorithm for imaging the subsurface with grids that vary automatically according to geological structure. Nonlinear traveltime tomography with Tikhonov regularization, solved by a conjugate gradient method, is a conventional approach for near-surface imaging. However, model regularization on regular, even grids assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, while details along geological boundaries are difficult to resolve. We therefore solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates grid cells within those structures for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried basalt model with one horizontal layer. Using variable grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and the basalt blocks, while using fewer grid cells. The field data were collected in an oil field in China, in an area where the subsurface structures are predominantly layered. The data set includes 476 shots with a 10 m spacing and 1735 receivers with a 10 m spacing. The first-arrival traveltimes of the seismograms were picked for tomography; the reciprocal errors of most shots are between 2 ms and 6 ms. The conventional tomography produces fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a well-resolved flat layer and fewer artifacts. Moreover, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids in small structures. Variable grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.

  18. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable for the hemodynamic state estimates; they were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which in turn was shown to be a better estimator than the EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both state and parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method that works in the filtering sense.

  19. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds detail on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example of source estimation for synthetic methane emissions from the Barnett shale formation.
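
    One standard way to combine a quadratic data fit with a sparsity term and simple bounds is proximal-gradient iteration; the sketch below (plain ISTA on an assumed test system, omitting the paper's dictionary representation) shows the soft-threshold-then-clip update.

      import numpy as np

      def sparse_tikhonov(A, b, lam, lower=0.0, upper=np.inf, iters=500):
          # min 0.5*||A x - b||^2 + lam*||x||_1  s.t.  lower <= x <= upper
          x = np.zeros(A.shape[1])
          t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from Lipschitz constant
          for _ in range(iters):
              v = x - t * (A.T @ (A @ x - b))   # gradient step on the data fit
              x = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # soft threshold
              x = np.clip(x, lower, upper)      # enforce bounds (e.g. nonnegative fluxes)
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 200))        # observation operator (stand-in)
      x_true = np.zeros(200)
      x_true[[20, 120]] = [3.0, 1.5]            # two localized "point sources"
      x_hat = sparse_tikhonov(A, A @ x_true, lam=0.1)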

  20. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    PubMed

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    The human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and on the fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with a spatially distributed elastic property. The simulation is performed on a 3D lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and the 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data, which collectively provide a consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.

  1. Determining "small parameters" for quasi-steady state

    NASA Astrophysics Data System (ADS)

    Goeke, Alexandra; Walcher, Sebastian; Zerz, Eva

    2015-08-01

    For a parameter-dependent system of ordinary differential equations we present a systematic approach to the determination of parameter values near which singular perturbation scenarios (in the sense of Tikhonov and Fenichel) arise. We call these special values Tikhonov-Fenichel parameter values. The principal application we intend is to equations that describe chemical reactions, in the context of quasi-steady state (or partial equilibrium) settings. Such equations have rational (or even polynomial) right-hand side. We determine the structure of the set of Tikhonov-Fenichel parameter values as a semi-algebraic set, and present an algorithmic approach to their explicit determination, using Groebner bases. Examples and applications (which include the irreversible and reversible Michaelis-Menten systems) illustrate that the approach is rather easy to implement.

  2. Compact terahertz spectrometer based on disordered rough surfaces

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Jiang, Bing; Ge, Jia-cheng; Zhu, Yong-yuan; Li, Xing-ao; Huang, Wei

    2018-01-01

    In this paper, a compact spectrometer based on disordered rough surfaces for operation in the terahertz band is presented. The proposed spectrometer consists of three components, used for dispersion, modulation and detection, respectively. The disordered rough surfaces, which act as the dispersion component, are modulated by the modulation component. Different scattering intensities are captured by the detection component at different extents of modulation. With a calibration measurement process, one can reconstruct the spectrum of the probe terahertz beam by solving a system of simultaneous linear equations. A Tikhonov regularization approach has been implemented to improve the accuracy of the spectral reconstruction. The reported broadband, compact, high-resolution terahertz spectrometer is well suited for portable terahertz spectroscopy applications.

  3. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has primarily been used to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conductive path and a clear image cannot be produced using standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples with conventional ECT systems by investigating the two main aspects of image reconstruction, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metallic samples using the suggested algorithms and forward model, especially using the level set algorithm to find the boundary of the metal.

  4. Redundancy Analysis of Capacitance Data of a Coplanar Electrode Array for Fast and Stable Imaging Processing

    PubMed Central

    Wen, Yintang; Zhang, Zhenda; Zhang, Yuyan; Sun, Dongtao

    2017-01-01

    A coplanar electrode array sensor is established for the imaging of adhesive-layer defects in composite materials. The sensor is based on the capacitive edge effect, which leads to considerably weak capacitance data that are susceptible to environmental noise. The inverse problem of coplanar array electrical capacitance tomography (C-ECT) is ill-conditioned, so a small error in the capacitance data can seriously affect the quality of the reconstructed images. In order to achieve a stable image reconstruction process, a redundancy analysis method for the capacitance data is proposed. The proposed method is based on contribution rate and anti-interference capability. According to the redundancy analysis, the capacitance data are divided into valid and invalid data. When the image is reconstructed from the valid data, the sensitivity matrix needs to be changed accordingly. In order to evaluate the effectiveness of the sensitivity map, singular value decomposition (SVD) is used. Finally, the two-dimensional (2D) and three-dimensional (3D) images are reconstructed by the Tikhonov regularization method. Comparison with images reconstructed from the raw capacitance data shows that the stability of the image reconstruction process is improved while the quality of the reconstructed images is not degraded. As a result, much of the invalid data need not be collected, and the data acquisition time can be reduced. PMID:29295537

  5. A 2D forward and inverse code for streaming potential problems

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.

    2013-12-01

    The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources is the streaming current associated with the flow of groundwater. We can therefore apply the self-potential method to recover, non-intrusively, some information regarding the groundwater flow. We first solve the forward problem, starting with the solution of the groundwater flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduce an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and depth-weighting constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.

  6. Characterization of micron-size hydrogen clusters using Mie scattering.

    PubMed

    Jinno, S; Tanaka, H; Matsui, R; Kanasaki, M; Sakaki, H; Kando, M; Kondo, K; Sugiyama, A; Uesaka, M; Kishimoto, Y; Fukuda, Y

    2017-08-07

    Hydrogen clusters with diameters in the few-micrometer range, composed of 10^8-10^10 hydrogen molecules, have been produced for the first time in an expansion of supercooled, high-pressure hydrogen gas into a vacuum through a conical nozzle connected to a cryogenic pulsed solenoid valve. The size distribution of the clusters has been evaluated by measuring the angular distribution of laser light scattered from the clusters. The data were analyzed based on Mie scattering theory combined with the Tikhonov regularization method, including the instrumental functions; the validity of the analysis was assessed by performing a calibration study using a reference target consisting of standard micro-particles of two different sizes. The size distribution of the clusters was found to be discrete, with peaks at 0.33 ± 0.03, 0.65 ± 0.05, 0.81 ± 0.06, 1.40 ± 0.06 and 2.00 ± 0.13 µm in diameter. The highly reproducible and impurity-free nature of the micron-size hydrogen clusters makes them a promising target for laser-driven multi-MeV proton sources with currently available high-power lasers.

  7. A framework for testing the use of electric and electromagnetic data to reduce the prediction error of groundwater models

    NASA Astrophysics Data System (ADS)

    Christensen, N. K.; Christensen, S.; Ferre, T. P. A.

    2015-09-01

    Although geophysical data are used increasingly, it is still unclear how and when the integration of geophysical data improves the construction and predictive capability of groundwater models. This paper therefore presents the newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software wrapped to form a platform for generating and considering multi-modal data for objective hydrologic analysis. It is intentionally flexible, allowing for simple or sophisticated treatments of geophysical responses, hydrologic processes, parameterization, and inversion approaches. It can also be used to discover potential errors that can be introduced through petrophysical models and through approaches to correlating geophysical and hydrologic parameters. With HYTEB we study alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits, with typical hydraulic conductivities and electrical resistivities, covering impermeable bedrock with low resistivity. We investigate to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by regularization. For purely hydrologic inversion (HI, using only hydrologic data) we used Tikhonov regularization combined with singular value decomposition. For joint hydrogeophysical inversion (JHI) and sequential hydrogeophysical inversion (SHI), the resistivity estimates from TEM are used together with a petrophysical relationship to formulate the regularization term. In all cases the regularization stabilizes the inversion, but neither the HI nor the JHI objective function could be minimized uniquely. SHI or JHI with regularization based on the use of TEM data produced estimated hydraulic conductivity fields that bear more resemblance to the reference fields than those from HI with Tikhonov regularization. However, for the studied system the resistivities estimated by SHI or JHI must be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. Much of the lack of value of the geophysical data arises from a mistaken faith in the power of the petrophysical model in combination with geophysical data of low sensitivity, which propagates geophysical estimation errors into the hydrologic model parameters. With respect to reducing model prediction error, whether it is valuable to include geophysical data in the model calibration depends on the type of prediction. It is found that all calibrated models are good predictors of hydraulic head. When the stress situation is changed from that of the hydrologic calibration data, all models make biased predictions of head change. All calibrated models turn out to be very poor predictors of the pumping well's recharge area and groundwater age. The reason for this is that distributed recharge is parameterized as depending on the estimated hydraulic conductivity of the upper model layer, which tends to be underestimated. Another important insight from the HYTEB analysis is thus that either recharge should be parameterized and estimated in a different way, or other types of data should be added to better constrain the recharge estimates.

  8. Inverse electrocardiographic transformations: dependence on the number of epicardial regions and body surface data points.

    PubMed

    Johnston, P R; Walker, S J; Hyttinen, J A; Kilpatrick, D

    1994-04-01

    The inverse problem of electrocardiography, the computation of epicardial potentials from body surface potentials, is influenced by the desired resolution on the epicardium, the number of recording points on the body surface, and the method of limiting the inversion process. To examine the role of these variables in the computation of the inverse transform, Tikhonov's zero-order regularization and singular value decomposition (SVD) have been used to invert the forward transfer matrix. The inverses have been compared in a data-independent manner using the resolution and the noise amplification as endpoints. Sets of 32, 50, 192, and 384 leads were chosen as sets of body surface data, and 26, 50, 74, and 98 regions were chosen to represent the epicardium. The resolution and noise were both improved by using a greater number of electrodes on the body surface. When 60% of the singular values are retained, the results show a trade-off between noise and resolution, with typical maximal epicardial noise levels of less than 0.5% of maximum epicardial potentials for 26 epicardial regions, 2.5% for 50 epicardial regions, 7.5% for 74 epicardial regions, and 50% for 98 epicardial regions. As the number of epicardial regions is increased, the regularization technique effectively fixes the noise amplification but markedly decreases the resolution, whereas SVD results in an increase in noise and a moderate decrease in resolution. Overall, the regularization technique performs slightly better than SVD in the noise-resolution relationship. There is a region at the posterior of the heart that was poorly resolved regardless of the number of regions chosen. The variance of the resolution suggests the use of variable-size epicardial regions chosen on the basis of resolution.
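
    The comparison of the two inverses can be reproduced schematically (a sketch on a random toy transfer matrix, not the physiological one): build the zero-order Tikhonov and truncated-SVD inverse operators, then inspect the resolution matrix G A and a simple noise-amplification measure ||G||.

      import numpy as np

      def inverse_operators(A, lam, k):
          # Zero-order Tikhonov and truncated-SVD inverses of a transfer matrix A.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          G_tik = Vt.T @ np.diag(s / (s**2 + lam**2)) @ U.T
          G_tsvd = Vt.T[:, :k] @ np.diag(1.0 / s[:k]) @ U.T[:k, :]
          for name, G in (("Tikhonov", G_tik), ("TSVD", G_tsvd)):
              R = G @ A   # resolution matrix; the identity would be perfect
              print(name,
                    "resolution error:", np.linalg.norm(R - np.eye(A.shape[1])),
                    "noise amplification:", np.linalg.norm(G))
          return G_tik, G_tsvd

      rng = np.random.default_rng(0)
      A = rng.standard_normal((192, 74)) * (1.0 / np.arange(1, 75))  # 192 leads, 74 regions
      inverse_operators(A, lam=1e-2, k=int(0.6 * 74))  # retain 60% of singular values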

  9. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    USGS Publications Warehouse

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze both single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of the source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to find an effective optimal smoothness criterion for applying Tikhonov regularization in nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.

  10. Direct detection of metal-insulator phase transitions using the modified Backus-Gilbert method

    NASA Astrophysics Data System (ADS)

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    2018-03-01

    The detection of a (semi)metal-insulator phase transition can be extremely difficult if the local order parameter which characterizes the ordered phase is unknown. In some cases, it is even impossible to define a local order parameter: the most prominent example of such a system is the spin liquid state. This state was proposed to exist in the Hubbard model on the hexagonal lattice in a region between the semimetal phase and the antiferromagnetic insulator phase, and its existence has been the subject of a long debate. In order to detect such exotic phases, we must use methods other than those applied to more familiar examples of spontaneous symmetry breaking. We have modified the Backus-Gilbert method of analytic continuation, which was previously used in the calculation of the pion quasiparticle mass in lattice QCD. The modification consists of the introduction of a Tikhonov regularization scheme, used to treat the ill-conditioned kernel. This modified Backus-Gilbert method is applied to the Euclidean propagators in momentum space calculated using the hybrid Monte Carlo algorithm. In this way, it is possible to reconstruct the full dispersion relation and to estimate the mass gap, which is a direct signal of the transition to the insulating state. We demonstrate the utility of this method in our calculations for the Hubbard model on the hexagonal lattice. We also apply the method to the metal-insulator phase transition in the Hubbard-Coulomb model on the square lattice.

  11. Self-prior strategy for organ reconstruction in fluorescence molecular tomography

    PubMed Central

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-01-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and thereby to overcome the high cost and ionizing radiation associated with the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information; and lastly an iterative Laplacian regularization algorithm is applied to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic studies can be aided by this strategy. PMID:29082094

  12. Self-prior strategy for organ reconstruction in fluorescence molecular tomography.

    PubMed

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-10-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and thereby to overcome the high cost and ionizing radiation associated with the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information; and lastly an iterative Laplacian regularization algorithm is applied to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic studies can be aided by this strategy.

  13. On decomposing stimulus and response waveforms in event-related potentials recordings.

    PubMed

    Yin, Gang; Zhang, Jun

    2011-06-01

    Event-related potentials (ERPs) reflect the brain activities related to specific behavioral events and are obtained by averaging across many trial repetitions, with individual trials aligned to the onset of a specific event, e.g., the onset of the stimulus (s-aligned) or the onset of the behavioral response (r-aligned). However, the s-aligned and r-aligned ERP waveforms do not purely reflect, respectively, the underlying stimulus (S-) or response (R-) component waveform, because of their cross-contamination in the recorded ERP waveforms. Zhang [J. Neurosci. Methods, 80, pp. 49-63, 1998] proposed an algorithm to recover the pure S-component waveform and the pure R-component waveform from the s-aligned and r-aligned ERP average waveforms. However, owing to the nature of this inverse problem, a direct solution is sensitive to noise that disproportionately affects low-frequency components, hindering the practical implementation of this algorithm. Here, we apply the Wiener deconvolution technique to deal with noise in the input data, and investigate a Tikhonov regularization approach to obtain a stable solution that is robust against variance in the sampling of the reaction-time distribution (when the number of trials is low). Our method is demonstrated using data from a Go/NoGo experiment on image classification and recognition.
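
    The flavor of the approach (not Zhang's algorithm itself) can be conveyed by a generic frequency-domain deconvolution in which a Tikhonov term plays the role of the Wiener noise floor; the waveform, the reaction-time kernel and the regularization weight below are toy assumptions.

      import numpy as np

      def regularized_deconvolve(y, h, lam=1e-2):
          # Divide by the transfer function only where it is large; lam damps the
          # frequencies where H is small and noise would otherwise blow up.
          n = len(y)
          H = np.fft.rfft(h, n)
          Y = np.fft.rfft(y, n)
          X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
          return np.fft.irfft(X, n)

      rng = np.random.default_rng(2)
      t = np.arange(512)
      x_true = np.exp(-0.5 * ((t - 100) / 12.0) ** 2)   # latent component waveform
      rt_pdf = np.exp(-0.5 * ((t - 60) / 20.0) ** 2)    # toy reaction-time density
      rt_pdf /= rt_pdf.sum()
      y = np.convolve(x_true, rt_pdf)[:512] + 0.01 * rng.standard_normal(512)
      x_hat = regularized_deconvolve(y, rt_pdf)         # peak recovered near t = 100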

  14. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values; both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory-prepared or laboratory-determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can come from different sources, including pure-component interference samples, blanks, and constant-analyte samples. The approach is also applicable to calibration maintenance, where the analyte pure component spectrum is measured under one set of conditions and the nonanalyte spectra are measured under new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows the inclusion of reference samples if such samples are available.

  15. Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland

    NASA Astrophysics Data System (ADS)

    Habermann, M.; Truffer, M.; Maxwell, D.

    2013-06-01

    Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We investigate the basal conditions throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a Shallow Shelf Approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of basal yield stress in the first 7 km upstream of the 2008 grounding line and no significant changes farther upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
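    As an illustrative aside, a minimal Python sketch of L-curve selection of the Tikhonov parameter follows: for each candidate lambda, the data misfit and model norm are recorded, and the corner of the log-log curve (here, the point of maximum curvature) is taken. The ill-conditioned system is generic, not the glacier model.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 50
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))
        A = U @ np.diag(np.logspace(0, -8, n)) @ V.T      # ill-conditioned forward map
        x_true = V[:, 0] + 0.5 * V[:, 1]
        b = A @ x_true + 1e-6 * rng.standard_normal(n)

        lams = np.logspace(-12, 0, 60)
        logm, logn = [], []
        for lam in lams:
            x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
            logm.append(np.log(np.linalg.norm(A @ x - b)))  # log misfit
            logn.append(np.log(np.linalg.norm(x)))          # log model norm

        m, s = np.array(logm), np.array(logn)
        dm, ds = np.gradient(m), np.gradient(s)
        d2m, d2s = np.gradient(dm), np.gradient(ds)
        kappa = np.abs(dm * d2s - ds * d2m) / (dm**2 + ds**2) ** 1.5
        print("L-curve corner at lambda =", lams[np.argmax(kappa)])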

  16. Approaches to highly parameterized inversion: A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.

  17. Computed myography: three-dimensional reconstruction of motor functions from surface EMG data

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2008-12-01

    We describe a methodology called computed myography to qualitatively and quantitatively determine the activation level of individual muscles from voltage measurements made by an array of voltage sensors on the skin surface. A finite element model for electrostatics simulation is constructed from morphometric data. For the inverse problem, we utilize a generalized Tikhonov regularization, which imposes smoothness on the reconstructed sources inside the muscles and suppresses sources outside the muscles using a penalty term. Results from experiments with simulated and human data are presented for activation reconstructions of three muscles in the upper arm (biceps brachii, brachialis and triceps). This approach potentially offers a new clinical tool to sensitively assess muscle function in patients suffering from neurological disorders (e.g., spinal cord injury), and could more accurately guide advances in the evaluation of specific rehabilitation training regimens.
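    As an illustrative aside, a minimal Python sketch of a generalized Tikhonov inversion with a region penalty follows: a term alpha*||x||^2 keeps the solution small and a strong penalty suppresses sources outside a designated "muscle" region. The forward matrix and geometry are toy assumptions, not the paper's finite element model.

        import numpy as np

        rng = np.random.default_rng(4)
        n_src, n_sens = 80, 30
        A = rng.standard_normal((n_sens, n_src))   # toy lead-field / forward matrix
        inside = np.zeros(n_src, dtype=bool)
        inside[25:45] = True                       # "muscle" region

        x_true = np.where(inside, np.sin(np.linspace(0, np.pi, n_src)), 0.0)
        b = A @ x_true + 0.01 * rng.standard_normal(n_sens)

        alpha = 1e-3                               # smallness of the solution
        beta = 1e2                                 # penalty on sources outside muscles
        P = np.diag((~inside).astype(float))
        x = np.linalg.solve(A.T @ A + alpha * np.eye(n_src) + beta * P.T @ P, A.T @ b)
        print("max |source| outside region:", np.abs(x[~inside]).max())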

  18. Full-Sky Maps of the VHF Radio Sky with the Owens Valley Radio Observatory Long Wavelength Array

    NASA Astrophysics Data System (ADS)

    Eastwood, Michael W.; Hallinan, Gregg

    2018-05-01

    21-cm cosmology is a powerful new probe of the intergalactic medium at redshifts 20 ≳ z ≳ 6, corresponding to the Cosmic Dawn and Epoch of Reionization. Current observations of the highly redshifted 21-cm transition are limited by the dynamic range they can achieve against foreground sources of low-frequency (<200 MHz) radio emission. We used the Owens Valley Radio Observatory Long Wavelength Array (OVRO-LWA) to generate a series of new high-fidelity sky maps that capture emission on angular scales ranging from tens of degrees to ~15 arcmin, at frequencies between 36 and 73 MHz. These sky maps were generated by applying Tikhonov-regularized m-mode analysis imaging, a new interferometric imaging technique that is uniquely suited to low-frequency, wide-field, drift-scanning interferometers.

  19. Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage

    NASA Astrophysics Data System (ADS)

    Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing

    2018-02-01

    With the advances of x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential in in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-ray within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution.

  20. Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng

    2017-10-01

    A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurements, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally to show that it performs well in estimating both the pseudo-static and dynamic tilt measurements.
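    As an illustrative aside, a minimal Python sketch of a Tikhonov-style fusion of the two sensors follows: it minimizes lam*||theta - theta_tilt||^2 + ||D theta - omega||^2, where D is a finite-difference derivative, so the tilt sensor anchors low frequencies and the gyroscope constrains high frequencies. This time-domain solve stands in for the paper's frequency-domain/FIR implementation, and all signals and weights are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(5)
        fs, n = 100.0, 1000
        t = np.arange(n) / fs
        theta_true = 0.2 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * np.sin(2 * np.pi * 5 * t)

        omega = np.gradient(theta_true, 1 / fs) + 0.02 * rng.standard_normal(n)  # gyro
        theta_tilt = theta_true + 0.05 * rng.standard_normal(n)  # noisy tilt sensor

        D = (np.eye(n, k=1) - np.eye(n))[:-1] * fs   # first-difference derivative
        lam = 1.0                                    # trust in the tilt sensor
        theta = np.linalg.solve(lam * np.eye(n) + D.T @ D,
                                lam * theta_tilt + D.T @ omega[:-1])
        print("fused RMS error:", np.sqrt(np.mean((theta - theta_true) ** 2)))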

  1. Reconstruction of the temperature field for inverse ultrasound hyperthermia calculations at a muscle/bone interface.

    PubMed

    Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li

    2004-02-01

    An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratio of the reflected longitudinal wave to the incident longitudinal wave and that of the refracted shear wave to the total transmitted wave into bone when calculating the absorbed power field, and then to reconstruct the temperature distribution in muscle and bone regions based on a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects on the performance of the inverse algorithm of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations are investigated. Results show that noisy input data degrade the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results are also presented demonstrating an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on singular-value decomposition analysis, the optimal sensor position in a case utilizing only one temperature sensor can be determined so as to make the inverse algorithm converge to the true solution.

  2. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    PubMed

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but they have not been widely used in tomography problems such as CBCT reconstruction. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows the full weighting matrix A to be used for CBCT reconstruction without exceeding the memory capacity of commodity computers. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images.
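    As an illustrative aside, a minimal Python sketch of the blockwise idea follows, using SciPy's LSQR with a LinearOperator whose matrix-vector products loop over row blocks (standing in for blocks paged from disk). Sizes and density are toy assumptions, far smaller than a real CBCT weighting matrix; the damp argument adds Tikhonov regularization.

        import numpy as np
        from scipy.sparse import random as sprandom
        from scipy.sparse.linalg import LinearOperator, lsqr

        rng = np.random.default_rng(6)
        m, n, nb = 1200, 400, 6                    # rows, columns, number of blocks
        rows = m // nb
        blocks = [sprandom(rows, n, density=0.05, format="csr", random_state=i)
                  for i in range(nb)]

        def matvec(x):                             # y = A x, one block at a time
            return np.concatenate([B @ x for B in blocks])

        def rmatvec(y):                            # z = A^T y, one block at a time
            out = np.zeros(n)
            for i, B in enumerate(blocks):
                out += B.T @ y[i * rows:(i + 1) * rows]
            return out

        A = LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)
        x_true = rng.standard_normal(n)
        b = A.matvec(x_true)
        x = lsqr(A, b, damp=1e-3, iter_lim=200)[0]
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))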

  3. Subsurface classification of objects under turbid waters by means of regularization techniques applied to real hyperspectral data

    NASA Astrophysics Data System (ADS)

    Carpena, Emmanuel; Jiménez, Luis O.; Arzuaga, Emmanuel; Fonseca, Sujeily; Reyes, Ernesto; Figueroa, Juan

    2017-05-01

    Improved benthic habitat mapping is needed to monitor coral reefs around the world and to assist coastal zone management programs. A fundamental challenge to remotely sensed mapping of coastal shallow waters is the significant disparity in the optical properties of the water column caused by the interaction between the coast and the sea. The objects to be classified have weak signals that interact with turbid waters containing sediments. In real scenarios, the absorption and backscattering coefficients are unknown, with different sources of variability (river discharges and coastal interactions). Under normal circumstances, the depth of the shallow water is another unknown variable. This paper presents the development of algorithms for retrieving information and their application to the classification and mapping of objects under coastal shallow waters with different unknown concentrations of sediments. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium and the sensor. The retrieval of information requires the development of mathematical models and processing tools in the areas of inversion, image reconstruction and classification of hyperspectral data. The algorithms developed were applied to a set of real hyperspectral imagery taken in a tank filled with water and TiO2 that emulates turbid coastal shallow waters. The Tikhonov regularization method was used in the inversion process to estimate the bottom albedo of the water tank, using a priori information in the form of stored, previously measured spectral signatures of objects of interest.

  4. L1 norm based common spatial patterns decomposition for scalp EEG BCI.

    PubMed

    Li, Peiyang; Xu, Peng; Zhang, Rui; Guo, Lanjin; Yao, Dezhong

    2013-08-06

    Brain computer interfaces (BCI) are one of the most popular branches of biomedical engineering. They aim at constructing a communication channel between disabled persons and auxiliary equipment in order to improve patients' quality of life. In motor imagery (MI) based BCI, one of the popular feature extraction strategies is Common Spatial Patterns (CSP). In practical BCI situations, scalp EEG inevitably contains outliers and artifacts introduced by ocular activity, head motion or loose electrode contact. Because outliers and artifacts are usually observed with large amplitude, when CSP is solved in terms of the L2 norm their effect is exaggerated by the squaring of large values, which ultimately degrades MI-based BCI performance. The L1 norm, by contrast, lowers the effect of outliers, as has been shown in other application fields such as the EEG inverse problem and face recognition. In this paper, we present a new CSP implementation that uses the L1 norm, instead of the L2 norm, to solve the eigenproblem for spatial filter estimation, with the aim of improving the robustness of CSP to outliers. To evaluate the performance of our method, we applied it, along with the standard CSP and the regularized CSP with Tikhonov regularization (TR-CSP), to both a peer BCI dataset with simulated outliers and a dataset from the MI BCI system developed in our group. The McNemar test is used to investigate whether the differences among the three CSPs are statistically significant. The results on both the simulated and real BCI datasets consistently show that the proposed method achieves much higher classification accuracies than the conventional CSP and the TR-CSP. By combining L1-norm-based eigendecomposition with Common Spatial Patterns, the proposed approach effectively improves the robustness of BCI systems to EEG outliers, and is thus promising for actual MI BCI applications, where outliers are inevitably introduced into EEG recordings.
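    As an illustrative aside, a minimal Python sketch of the TR-CSP baseline mentioned above follows (the paper's own contribution, the L1-norm eigenproblem, needs an iterative solver and is not shown). Regularized CSP filters are obtained here from a generalized eigenproblem with a ridge term alpha*I; the class covariance matrices are synthetic assumptions, and this is one common way of writing TR-CSP, not necessarily the paper's exact form.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(7)
        n_ch = 16
        def random_cov(scale):
            M = rng.standard_normal((n_ch, n_ch))
            return M @ M.T / n_ch * scale

        C1 = random_cov(1.0) + np.diag(np.linspace(2.0, 0.1, n_ch))   # class 1
        C2 = random_cov(1.0) + np.diag(np.linspace(0.1, 2.0, n_ch))   # class 2

        alpha = 0.1
        # generalized eigenproblem: C1 w = lambda (C1 + C2 + alpha I) w
        vals, vecs = eigh(C1, C1 + C2 + alpha * np.eye(n_ch))
        W = vecs[:, [0, 1, -2, -1]]   # filters from both ends of the spectrum
        print("eigenvalue range:", vals.min(), vals.max())
        print("spatial filter matrix shape:", W.shape)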

  5. Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland

    NASA Astrophysics Data System (ADS)

    Habermann, M.; Truffer, M.; Maxwell, D.

    2013-11-01

    Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We invert for basal conditions from surface velocity data throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a shallow-shelf approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice-softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of effective basal yield stress in the first 7 km upstream from the 2008 grounding line and no significant changes farther upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of effective basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.

  6. PERIODOGRAMS FOR MULTIBAND ASTRONOMICAL TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VanderPlas, Jacob T.; Ivezic, Željko

    This paper introduces the multiband periodogram, a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb-Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization, which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. After a pedagogical development of the formalism of least-squares spectral analysis, which motivates the essential features of the multiband model, we use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature and find that this method will be able to efficiently determine the correct period in the majority of LSST's bright RR Lyrae stars with as little as six months of LSST data, a vast improvement over the years of data reported to be required by previous studies. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
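    As an illustrative aside, a minimal Python sketch of the regularization idea follows: a shared base Fourier model plus per-band terms, with a Tikhonov (ridge) penalty that pushes variability into the shared part. Here only per-band offsets are penalized and a single harmonic is used; the data, penalty weight, and period grid are toy assumptions, much simpler than the published method.

        import numpy as np

        rng = np.random.default_rng(8)
        period, n_bands = 0.62, 3
        t = np.sort(rng.uniform(0, 30, (n_bands, 40)))   # per-band sampling times
        y = [np.sin(2 * np.pi * tb / period) + 0.1 * rng.standard_normal(40) + off
             for tb, off in zip(t, (0.0, 0.3, -0.2))]

        def design(tb, P):
            w = 2 * np.pi * tb / P
            return np.column_stack([np.ones_like(tb), np.sin(w), np.cos(w)])

        def chi2(P, lam=10.0):
            rows = []
            for b in range(n_bands):
                X = design(t[b], P)
                off = np.zeros((len(t[b]), n_bands)); off[:, b] = 1.0
                rows.append(np.hstack([X, off]))
            Xf, yf = np.vstack(rows), np.concatenate(y)
            reg = np.diag([0.0] * 3 + [lam] * n_bands)   # penalize offsets only
            beta = np.linalg.solve(Xf.T @ Xf + reg, Xf.T @ yf)
            return np.sum((yf - Xf @ beta) ** 2)

        periods = np.linspace(0.4, 1.0, 2000)
        print("best period:", periods[np.argmin([chi2(P) for P in periods])])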

  7. Thick-target-method study of Mαβ x-ray production cross sections of Pb and Bi impacted by positrons up to 9 keV

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Liang, Y.; Xu, M. X.; Yuan, Y.; Chang, C. H.; Qian, Z. C.; Wang, B. Y.; Kuang, P.; Zhang, P.

    2018-03-01

    Atomic M-shell x-ray production cross sections induced by positrons near the threshold energy are presented in this paper. In the experiment, an online monitoring technique, which uses a high-purity germanium detector to record the annihilation photons emitted from the pure thick target impacted by positrons, was developed to obtain an accurate count of the incident positrons. The effects on the experimental characteristic x-ray yield of the multiple scattering of incident positrons, of the bremsstrahlung and annihilation photons, and of other secondary particles were eliminated by Monte Carlo simulation in combination with theoretical integral calculation. The Tikhonov regularization method was adopted to handle the ill-posed inverse problem involved in the thick-target method, i.e., deriving x-ray production cross sections from the corrected characteristic x-ray yield. Experimental results for the Mαβ x-ray production cross sections of Pb and Bi impacted by 6-9-keV positrons were compared with the corresponding values predicted by the distorted-wave Born approximation (DWBA), and good agreement was found between the two. Moreover, we present the ratios of the Mαβ x-ray production cross sections by electron impact reported in the literature to those by 6-9-keV positron impact measured in this work; these were also in accordance with the theoretical ratios calculated from the predictions of DWBA theory.

  8. A stable downward continuation of airborne magnetic data: A case study for mineral prospectivity mapping in Central Iran

    NASA Astrophysics Data System (ADS)

    Abedi, Maysam; Gholami, Ali; Norouzi, Gholam-Hossain

    2013-03-01

    Previous studies have shown that a well-known multi-criteria decision making (MCDM) technique called the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE II) can effectively prioritize ground-based exploratory evidential layers for exploring porphyry copper deposits. In this paper, the PROMETHEE II method is applied to airborne geophysical (potassium radiometry and magnetometry) data, geological layers (fault and host rock zones), and various alteration layers extracted from remote sensing images. The central Iranian volcanic-sedimentary belt is chosen for this study. A stable downward continuation method, posed as an inverse problem in the Fourier domain using Tikhonov and edge-preserving regularizations, is proposed to enhance the magnetic data. Numerical analysis of synthetic models shows that the reconstructed magnetic data at the ground surface exhibit significant enhancement compared to the airborne data. The reduced-to-pole (RTP) and analytic signal filters are applied to the magnetic data to produce clearer maps of the magnetic anomalies. Four remote sensing evidential layers comprising argillic, phyllic, propylitic and hydroxyl alterations are extracted from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images in order to map the altered areas associated with porphyry copper deposits. Principal component analysis (PCA) based on six Enhanced Thematic Mapper Plus (ETM+) images is implemented to map the iron oxide layer. The final mineral prospectivity map based on the desired geo-data set shows that high-potential zones match well with previously worked mines and known copper deposits.
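    As an illustrative aside, a minimal 1-D Python sketch of Tikhonov-stabilized downward continuation follows. The exact inverse multiplies the field spectrum by exp(|k| h), which blows up high-wavenumber noise; the regularized operator damps that amplification. A 1-D profile is used for clarity, and the anomaly, spacing, and lambda are toy assumptions (the paper works on 2-D grids and adds an edge-preserving variant).

        import numpy as np

        rng = np.random.default_rng(9)
        n, dx, h = 512, 50.0, 200.0   # samples, spacing (m), flight height (m)
        x = np.arange(n) * dx
        g_true = np.exp(-(((x - 12800.0) / 800.0) ** 2))   # anomaly at ground level

        k = np.abs(np.fft.fftfreq(n, dx)) * 2 * np.pi
        up = np.exp(-k * h)                                # upward continuation
        g_air = np.real(np.fft.ifft(np.fft.fft(g_true) * up))
        g_air += 0.001 * rng.standard_normal(n)            # measurement noise

        lam = 1e-4
        down = up / (up**2 + lam)   # Tikhonov-regularized inverse of 'up'
        g_rec = np.real(np.fft.ifft(np.fft.fft(g_air) * down))
        print("relative error:", np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true))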

  9. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates, with uncertainties, of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and has a long-period character, the W phase is regularly used to estimate point source moment tensors, by the NEIC and PTWC among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals: starting with an a priori covariance matrix, the matrix is iteratively updated based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Moreover, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour after the origin time.
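    As an illustrative aside, a minimal Python sketch of choosing the Tikhonov parameter by the discrepancy principle follows: scanning from strong to weak regularization, the first lambda whose residual falls to the assumed noise level is kept. The linear system is generic, standing in for the over-parametrized multiple-time-window problem.

        import numpy as np

        rng = np.random.default_rng(10)
        m, n, sigma = 80, 200, 0.01
        A = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, -4, n))
        x_true = np.zeros(n); x_true[:5] = 1.0
        b = A @ x_true + sigma * rng.standard_normal(m)
        target = sigma * np.sqrt(m)          # expected noise-level residual

        for lam in np.logspace(2, -10, 120): # scan from strong to weak
            x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
            if np.linalg.norm(A @ x - b) <= target:
                print("discrepancy-principle choice: lambda =", lam)
                break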

  10. Time-lapse Inversion of Electrical Resistivity Data

    NASA Astrophysics Data System (ADS)

    Nguyen, F.; Kemna, A.

    2005-12-01

    Time-lapse geophysical measurements (also known as monitoring, repeat or multi-frame surveys) now play a critical role in non-destructively monitoring changes induced by human activity, such as reservoir compaction, and in studying natural processes, such as flow and transport in porous media. To invert such data sets into time-varying subsurface properties, several strategies are found in different engineering and scientific fields (e.g., in biomedical, process tomography, or geophysical applications). Indeed, for time-lapse surveys, the data sets and the models at each time frame have the particularity of being closely related to their "neighbors", provided the process does not induce chaotic or very large variations. Therefore, the information contained in the different frames can be used to constrain the inversion of the others. A first strategy consists in imposing constraints on the model based on prior estimation, a priori spatiotemporal or temporal behavior (arbitrary or based on a law describing the monitored process), restriction of changes to certain areas, or data change reproducibility. A second strategy aims to invert the model changes directly, where the objective function penalizes those models whose spatial, temporal, or spatiotemporal behavior differs from a prior assumption or from a computed a priori. Clearly, the incorporation of time-lapse a priori information, determined from data sets or assumed, has been proven to improve the resolving capability significantly, mainly by removing artifacts. However, these methods have not been systematically compared. In this paper, we focus on Tikhonov-like inversion approaches for electrical tomography imaging, to evaluate the capability of the different existing strategies and to propose new ones. To evaluate the bias inevitably introduced by time-lapse regularization, we quantified the relative contribution of the different approaches to the resolving power of the method. Furthermore, we incorporated different noise levels and types (random and/or systematic) to determine the strategies' ability to cope with real data. Introducing additional regularization terms also yields more regularization parameters to compute; since this is a difficult and computationally costly task, we propose that it should be proportional to the velocity of the process. To achieve these objectives, we tested the different methods using synthetic models and experimental data, taking noise and error propagation into account. Our study shows that the choice of inversion strategy depends strongly on the nature and magnitude of the noise, whereas the choice of regularization term strongly influences the resulting image according to the a priori assumption. This study was developed under the scope of the European project ALERT (GOCE-CT-2004-505329).

  11. High-resolution CSR GRACE RL05 mascons

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2016-10-01

    The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal-area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. They are derived from GRACE information only, and no external model or data are used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals toward past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.

  12. Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage.

    PubMed

    Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing

    2018-02-01

    With the advances of x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential in in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-ray within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution.

  13. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps that reverse the T2*MRI procedure: fieldmap calculation from the MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver provides noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI in two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
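    As an illustrative aside, a minimal Python sketch of the Tikhonov-regularized inverse step (the first of the three approaches listed above) follows. The dipole kernel is diagonal in Fourier space, so the regularized inverse reduces to a spectral division; grid size, source, and noise level are toy assumptions, and the paper's preferred split Bregman TV solver is not shown.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 32
        kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n),) * 3, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        D = 1.0 / 3.0 - np.divide(kz**2, k2, out=np.zeros_like(k2), where=k2 > 0)

        chi = np.zeros((n, n, n))
        chi[12:20, 12:20, 12:20] = 1.0                    # susceptibility source
        field = np.real(np.fft.ifftn(np.fft.fftn(chi) * D))
        field += 1e-3 * rng.standard_normal(field.shape)  # noisy fieldmap

        lam = 1e-2
        chi_rec = np.real(np.fft.ifftn(np.fft.fftn(field) * D / (D**2 + lam)))
        print("spatial correlation:", np.corrcoef(chi.ravel(), chi_rec.ravel())[0, 1])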

  14. Development of daily "swath" mascon solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas

    2016-04-01

    The Gravity Recovery and Climate Experiment (GRACE) mission has, over the past 14 years, provided invaluable data, the only data of their kind, measuring the total water column in the Earth system. The GRACE project provides monthly average solutions, and experimental quick-look solutions and regularized sliding-window solutions with variable daily weights are available from the Center for Space Research (CSR). The need for special handling of these solutions in data assimilation and the possibility of capturing the total water storage (TWS) signal at sub-monthly time scales motivated this study. This study discusses the progress of the development of true daily high-resolution "swath" mascon total water storage estimates from GRACE using Tikhonov regularization. These solutions include estimates of daily total water storage for the mascon elements that were "observed" by the GRACE satellites on a given day. This paper discusses the computation techniques and the signal, error and uncertainty characterization of these daily solutions. We discuss comparisons with the official GRACE RL05 solutions and with the CSR mascon solution to characterize the impact on science results, especially at sub-monthly time scales. The evaluation emphasizes the temporal signal characteristics and is validated against in situ data sets and multiple models.

  15. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.

    PubMed

    Ho, Kevin I-J; Leung, Chi-Sing; Sum, John

    2010-06-01

    In the last two decades, many online fault/noise injection algorithms have been developed to attain fault tolerant neural networks. However, little theoretical work related to their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that the convergence of these six online algorithms is almost sure. Moreover, the true objective functions being minimized are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error; thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.

  16. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, in which the minimum power distortionless response beamformer localizes the sources and a Tikhonov regularization algorithm extracts the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure matching technique. To establish the room response model required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.

  17. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for understanding the underlying physical or chemical mechanisms of the dynamic behaviors of the measured objects. In real-world measurement environments, imaging objects are often in a dynamic process, and exploiting the spatial-temporal correlations of this dynamic nature can improve the imaging quality. Unlike existing imaging methods commonly used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor consisting of a low rank tensor and a sparse tensor, within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which changes slowly over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which change rapidly over time. With the assistance of Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, accounting for the multi-frame measurement data, the dynamic evolution information of a time-varying imaging object, and the characteristics of the low rank and sparse tensors, is proposed to convert the imaging task in the ECT measurement into the reconstruction of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed in batches. The feasibility and effectiveness of the developed reconstruction method are numerically validated.

  18. Using sparse regularization for multi-resolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-10-01

    Computerized ionospheric tomography (CIT) is a technique for reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements; it is usually posed as an inverse problem. In this experiment, the measurements are considered to come from the phase of the GPS signal and are therefore affected by bias, so the STEC cannot be considered in absolute terms but rather in relative terms. Measurements are collected from receivers that are not evenly distributed in space, and together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes are continuously changing in time and space, which can limit the accuracy with which CIT resolves the structures and processes that describe the ionosphere. Some inversion techniques are based on ℓ2 minimization algorithms (i.e., Tikhonov regularization), and a standard approach of this kind, using spherical harmonics as a reference, is implemented here for comparison with the new method. A new approach is proposed for CIT that permits sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on the ℓ1 minimization technique and on wavelet basis functions, chosen for their compact-representation properties. The ℓ1 minimization is selected because it can optimize the result under an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, and this is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization for estimating the coefficients. This is particularly true for an uneven observation geometry, and especially for multi-resolution CIT.

  19. Looking inside the Ocean: Toward an Autonomous Imaging System for Monitoring Gelatinous Zooplankton

    PubMed Central

    Corgnati, Lorenzo; Marini, Simone; Mazzei, Luca; Ottaviani, Ennio; Aliani, Stefano; Conversi, Alessandra; Griffa, Annalisa

    2016-01-01

    Marine plankton abundance and dynamics in the open and interior ocean remain largely unknown. Knowledge of gelatinous zooplankton distribution is especially challenging to obtain, because this type of plankton has a very fragile structure and cannot be directly sampled using traditional net-based techniques. To overcome this shortcoming, Computer Vision techniques can be successfully used for the automatic monitoring of this group. This paper presents the GUARD1 imaging system, a low-cost stand-alone instrument for underwater image acquisition and recognition of gelatinous zooplankton, and discusses the performance of three different methodologies, Tikhonov Regularization, Support Vector Machines and Genetic Programming, which were compared in order to select the one to run onboard the system for the automatic recognition of gelatinous zooplankton. The performance comparison highlights the high accuracy of the three methods in gelatinous zooplankton identification, showing their good capability to robustly select relevant features. In particular, the Genetic Programming technique achieves the same performance as the other two methods while using a smaller set of features, thus being the most efficient at avoiding computationally consuming preprocessing stages, a crucial requirement for an autonomous imaging system designed for long-lasting deployments like the GUARD1. The Genetic Programming algorithm was installed onboard the system, which was operationally tested in a two-month survey in the Ligurian Sea, providing satisfactory results in terms of monitoring and recognition performance. PMID:27983638

  20. Binary optimization for source localization in the inverse problem of ECG.

    PubMed

    Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf

    2014-09-01

    The goal of ECG imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. Despite its practical advantages, however, the method has a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV) based problem, assuming the TMV takes one of two possible values according to the heart abnormality under consideration. We investigate the localization of simulated ischemic areas and ectopic foci, and one clinical infarction case. The abnormality affects only the choice of the binary values, while the core of the algorithms remains the same, making the approximation easily adjustable to the application's needs. Two methods, a hybrid metaheuristic approach and the difference-of-convex-functions (DC) algorithm, were tested. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution insusceptible to errors, while the analytical DC scheme can be efficiently applied to higher-dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.

  1. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    PubMed

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both the data-model misfit and the optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements instead of the number of imaging parameters, increasing the computation speed up to four times per iteration in most under-determined 3D imaging problems. An analytic derivation of this efficient alternative form, using the Sherman-Morrison-Woodbury identity, is presented, and the form is proven to be equivalent not only analytically but also numerically. Equivalent alternative forms for other minimization methods, such as Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, use of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging parameters exceeds the number of measurements by a factor greater than 2.
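    As an illustrative aside, a minimal Python sketch of the matrix identity underlying the speed-up follows: by the Sherman-Morrison-Woodbury (push-through) identity, the n x n Tikhonov/LM update (J^T J + lam I)^{-1} J^T r equals the m x m form J^T (J J^T + lam I)^{-1} r, so the solve scales with the number of measurements m. Sizes are toy stand-ins for the under-determined 3D problem.

        import numpy as np

        rng = np.random.default_rng(12)
        m, n, lam = 50, 2000, 1e-2                  # measurements << parameters
        J = rng.standard_normal((m, n))
        r = rng.standard_normal(m)

        dx_big = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)    # n x n solve
        dx_small = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), r)  # m x m solve
        print("max difference:", np.abs(dx_big - dx_small).max())       # ~ machine eps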

  2. Forward and Inverse Modeling of Self-potential. A Tomography of Groundwater Flow and Comparison Between Deterministic and Stochastic Inversion Methods

    NASA Astrophysics Data System (ADS)

    Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.

    2016-12-01

    Applications of the self-potential method in the fields of hydrogeology and environmental sciences have developed significantly during the last two decades, with strong use in the identification of groundwater flows. Although only a few authors deal with the solution of the forward problem, especially in the geophysics literature, different inversion procedures are currently being developed; in most cases, however, they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem with the finite element method, using St. Venant's principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Two simple aquifer models were then generated with specific boundary conditions, and the head potentials, velocity fields and electric potentials in the medium were computed. From the model's surface electric potential, the inverse problem is solved to retrieve the source of the electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The deterministic approach was carried out by implementing Tikhonov regularization with a stabilizing operator adapted to the finite element mesh, while for the stochastic approach a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov random fields (MRF) was constructed. For all the implemented methods, the direct and inverse models were contrasted in two ways: 1) the shape and distribution of the vector field, and 2) the histogram of magnitudes. Finally, it was concluded that inversion procedures improve when the behavior of the velocity field is taken into account; thus, the deterministic method is more suitable for unconfined aquifers than for confined ones. McMC has restricted applications and requires a lot of information (particularly for potential fields), while MRF gives a remarkable response, especially when dealing with confined aquifers.

  3. Dependence of the forward light scattering on the refractive index of particles

    NASA Astrophysics Data System (ADS)

    Guo, Lufang; Shen, Jianqi

    2018-05-01

    In particle sizing techniques based on forward light scattering, the scattered light signal (SLS) is closely related to the relative refractive index (RRI) of the particles with respect to the surrounding medium, especially when the particles are transparent (or weakly absorbing) and small in size. Interference between the diffraction and the multiple internal reflections of the scattered light can cause the SLS to oscillate with RRI and produce abnormal intervals, especially for narrowly distributed small-particle systems, which makes the inverse problem more difficult. To improve the inversion results, a Tikhonov regularization algorithm with B-spline functions is proposed, in which each matrix element is calculated over a range of particle sizes instead of using the mean particle diameter of each size fraction. In this way, the influence of the abnormal intervals on the inversion results can be eliminated. In addition, for measurements of narrowly distributed small particles, it is suggested to detect the SLS over a wider scattering angle range to include more information.

  4. Information loss and reconstruction in diffuse fluorescence tomography

    PubMed Central

    Bonfert-Taylor, Petra; Leblond, Frederic; Holt, Robert W.; Tichauer, Kenneth; Pogue, Brian W.; Taylor, Edward C.

    2012-01-01

    This paper is a theoretical exploration of spatial resolution in diffuse fluorescence tomography. It is demonstrated that, given a fixed imaging geometry, one cannot—relative to standard techniques such as Tikhonov regularization and truncated singular value decomposition—improve the spatial resolution of the optical reconstructions via increasing the node density of the mesh considered for modeling light transport. Using techniques from linear algebra, it is shown that, as one increases the number of nodes beyond the number of measurements, information is lost by the forward model. It is demonstrated that this information cannot be recovered using various common reconstruction techniques. Evidence is provided showing that this phenomenon is related to the smoothing properties of the elliptic forward model that is used in the diffusion approximation to light transport in tissue. This argues for reconstruction techniques that are sensitive to boundaries, such as L1-reconstruction and the use of priors, as well as the natural approach of building a measurement geometry that reflects the desired image resolution. PMID:22472763

  5. Total variation optimization for imaging through turbid media with transmission matrix

    NASA Astrophysics Data System (ADS)

    Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei; Liu, Jietao; Zhang, Jianqi

    2016-12-01

    With the transmission matrix (TM) of the whole optical system measured, the image of an object behind a turbid medium can be recovered from its speckle field by means of an image reconstruction algorithm. Instead of the Tikhonov regularization algorithm (TRA), total variation minimization by augmented Lagrangian and alternating direction algorithms (TVAL3) is introduced to recover object images. As a total variation (TV) based approach, TVAL3 damps noise more effectively and preserves edges better than TRA, thus providing superior image quality. Different levels of detector noise and TM-measurement noise are successively added to analyze the noise robustness of the two algorithms. Simulation results show that TVAL3 is able to recover more details and suppress more noise than TRA under different noise levels, thus providing much better image quality. Furthermore, whether for detector noise or TM-measurement noise, the images reconstructed by TVAL3 at SNR = 15 dB are far superior to those by TRA at SNR = 50 dB.

  6. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    PubMed Central

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-01-01

    Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, in which angular super-resolution is achieved by solving the corresponding deconvolution problem under the maximum a posteriori (MAP) criterion. The algorithm considers the noise to be composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets, under the assumption that the radar image of interest can be represented by the dominant scatterers in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution than conventional algorithms such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson-Lucy algorithm. PMID:25806871

  7. Geometry-based ensembles: toward a structural characterization of the classification boundary.

    PubMed

    Pujol, Oriol; Masip, David

    2009-06-01

    This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points: points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov-regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.

  8. Inverse Electrocardiographic Source Localization of Ischemia: An Optimization Framework and Finite Element Solution

    PubMed Central

    Wang, Dafang; Kirby, Robert M.; MacLeod, Rob S.; Johnson, Chris R.

    2013-01-01

    With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving finite element discretization, thereby making the optimization independent of discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem’s specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization. PMID:23913980

  9. Observation of a Saharan dust outbreak in the frame of the Convective and Orographically-induced Precipitation Study

    NASA Astrophysics Data System (ADS)

    Di Girolamo, Paolo; Summa, Donato; Bhawar, Rohini; Di Iorio, Tatiana; Caccaini, Marco; Veselovskii, Igor; Kolgotin, Alexey

    2009-03-01

    The Raman lidar system BASIL was operational in Achern (Supersite R, Lat: 48.64° N, Long: 8.06° E, Elev.: 140 m) in the frame of the Convective and Orographically-induced Precipitation Study. BASIL operated continuously over a period of approx. 36 hours, from 06:22 UTC on 1 August to 18:28 UTC on 2 August 2007, to cover IOPs 13 a-b. In this timeframe the signature of a severe Saharan dust outbreak episode was captured. An inversion algorithm was used to retrieve particle size distribution parameters, i.e., mean and effective radius, number, surface area and volume concentration, and complex refractive index, as well as the parameters of a bimodal particle size distribution, from the multi-wavelength lidar data of particle backscattering and extinction. The inversion method employs Tikhonov regularization. Size distribution parameters are estimated as a function of altitude at different times during the dust outbreak event. Retrieval results reveal the dominance in the upper dust layer of a coarse mode with radii of 3-4 μm. Number density and volume concentration are in the range 100-800 cm-3 and 5-40 μm3/cm3, respectively, while the real and imaginary parts of the complex refractive index are in the range 1.41-1.53 and 0.003-0.014, respectively.
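
    Retrievals of this kind invert a smoothing (Fredholm first-kind) kernel relating the size distribution to a handful of optical channels. The sketch below is a generic stand-in, not BASIL's retrieval code: the kernel, channel set, noise level and regularization weight are all invented, and smoothness is imposed through a second-difference Tikhonov penalty.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    r = np.linspace(0.1, 5.0, 80)                # particle radius grid (um)
    c = np.linspace(0.5, 4.5, 12)                # "channel" sensitivity centers
    # Toy smoothing kernel standing in for backscatter/extinction efficiencies
    K = np.exp(-((c[:, None] - r[None, :]) / 0.8) ** 2)

    f_true = np.exp(-0.5 * ((r - 3.5) / 0.4) ** 2)   # coarse mode near 3.5 um
    dr = r[1] - r[0]
    A = K * dr
    g = A @ f_true * (1 + 0.05 * rng.standard_normal(len(c)))  # 5% noise

    # Tikhonov inversion with a second-difference smoothness penalty
    L2 = np.diff(np.eye(len(r)), n=2, axis=0)
    alpha = 1e-2
    f_rec = np.linalg.solve(A.T @ A + alpha * L2.T @ L2, A.T @ g)
    f_rec = np.clip(f_rec, 0.0, None)            # crude non-negativity
    print("recovered mode radius (um):", r[np.argmax(f_rec)])
    ```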

  10. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

    Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter, and the uncertainty in the solution is determined from the width of the credible intervals. The width of the credible intervals is significantly reduced by the inclusion of a smoothing prior, and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step, and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals do not depend on a previous solution and better predict characteristics at higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
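
    The zeroth- versus first-order distinction reduces to the choice of the operator L in the generalized Tikhonov functional ||Af − b||² + α²||Lf||². The toy problem below (an invented smoothing kernel, not a CSE inversion) illustrates how an identity L pulls the solution toward zero while a first-difference L merely keeps it smooth.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 15, 60
    x_grid = np.linspace(0, 1, n)
    # Mildly smoothing forward operator (stand-in for the CSE integral kernel)
    A = np.exp(-((np.linspace(0, 1, m)[:, None] - x_grid[None, :]) / 0.15) ** 2)

    f_true = np.sin(2 * np.pi * x_grid) + 1.5      # smooth, nonzero-mean profile
    b = A @ f_true + 0.05 * rng.standard_normal(m)

    def tikhonov(A, b, L, alpha):
        """Generalized Tikhonov: argmin ||A f - b||^2 + alpha^2 ||L f||^2."""
        return np.linalg.solve(A.T @ A + alpha**2 * L.T @ L, A.T @ b)

    L0 = np.eye(n)                                 # zeroth order: small solution
    L1 = np.diff(np.eye(n), axis=0)                # first order: smooth solution
    f0 = tikhonov(A, b, L0, 0.1)
    f1 = tikhonov(A, b, L1, 0.1)
    for name, f in [("zeroth-order", f0), ("first-order", f1)]:
        print(name, "relative error:",
              np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
    ```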

  11. Quantifying Trace Amounts of Aggregates in Biopharmaceuticals Using Analytical Ultracentrifugation Sedimentation Velocity: Bayesian Analyses and F Statistics.

    PubMed

    Wafer, Lucas; Kloczewiak, Marek; Luo, Yin

    2016-07-01

    Analytical ultracentrifugation-sedimentation velocity (AUC-SV) is often used to quantify high molar mass species (HMMS) present in biopharmaceuticals. Although these species are often present in trace quantities, they have received significant attention due to their potential immunogenicity. Commonly, AUC-SV data is analyzed as a diffusion-corrected, sedimentation coefficient distribution, or c(s), using SEDFIT to numerically solve Lamm-type equations. SEDFIT also utilizes maximum entropy or Tikhonov-Phillips regularization to further allow the user to determine relevant sample information, including the number of species present, their sedimentation coefficients, and their relative abundance. However, this methodology has several, often unstated, limitations, which may impact the final analysis of protein therapeutics. These include regularization-specific effects, artificial "ripple peaks," and spurious shifts in the sedimentation coefficients. In this investigation, we experimentally verified that an explicit Bayesian approach, as implemented in SEDFIT, can largely correct for these effects. Clear guidelines on how to implement this technique and interpret the resulting data, especially for samples containing micro-heterogeneity (e.g., differential glycosylation), are also provided. In addition, we demonstrated how the Bayesian approach can be combined with F statistics to draw more accurate conclusions and rigorously exclude artifactual peaks. Numerous examples with an antibody and an antibody-drug conjugate were used to illustrate the strengths and drawbacks of each technique.

  12. Development of non-intrusive diagnostic techniques based on optical tomography

    NASA Astrophysics Data System (ADS)

    Dubot, Fabien

    Over the last two decades, whether in industrial processes or in medical imaging, optical diagnostic techniques have undergone growing development. The appeal of these methods rests mainly on the fact that they are completely non-invasive, that they use radiation sources that are harmless to humans and the environment, and that they are relatively inexpensive and easy to implement compared with other imaging techniques. One of these techniques is Diffuse Optical Tomography (DOT). This three-dimensional imaging method characterizes the radiative properties of a semi-transparent medium (STM) from near-infrared optical measurements obtained with a set of sources and detectors located on the boundary of the probed domain. It relies on a direct model of light propagation in the STM, which provides the predictions, and on an algorithm minimizing a cost function combining the predictions and the measurements, which allows reconstruction of the parameters of interest. In this work, the direct model is the diffusion approximation of the radiative transfer equation in the frequency domain, while the parameters of interest are the spatial distributions of the absorption and reduced scattering coefficients. This thesis is devoted to the development of a robust inverse method for solving the DOT problem in the frequency domain. To this end, the work is structured in three parts, which constitute the main axes of the thesis. First, a comparison of the damped Gauss-Newton and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms is proposed in the two-dimensional case. Two regularization methods are combined for each algorithm: mesh-based reduction of the dimension of the control space together with Tikhonov penalization for the damped Gauss-Newton algorithm, and mesh-based regularization together with Sobolev gradients, uniform or spatially dependent, applied when extracting the gradient of the cost function, for the BFGS method. Numerical results indicate that the BFGS algorithm outperforms the damped Gauss-Newton algorithm with regard to the quality of the reconstructions, the computation time, and the ease of selecting the regularization parameter. Second, a study of the quasi-independence of the optimal Tikhonov penalization parameter with respect to the dimension of the control space, in inverse problems of estimating spatially dependent functions, is carried out. This study follows an observation made in the first part of the work, where the Tikhonov parameter determined by the L-curve method turned out to be independent of the dimension of the control space in the underdetermined case. This hypothesis is proved theoretically and then verified numerically, first on a linear inverse heat conduction problem and then on the nonlinear inverse DOT problem. The numerical verification relies on determining an optimal Tikhonov parameter, defined as the one that minimizes the discrepancies between the targets and the reconstructions. The theoretical proof relies on the Morozov discrepancy principle in the linear case, while in the nonlinear case it rests essentially on the assumption that the radiative functions to be reconstructed are normally distributed random variables. In conclusion, the thesis shows that the Tikhonov parameter can be determined using a parameterization of the control variables associated with a coarse mesh, thereby reducing computation time. Third, a wavelet-based multiscale inverse method combined with the BFGS algorithm is developed. This method, which reformulates the original inverse problem as a sequence of inverse subproblems from the largest scale to the smallest by means of the wavelet transform, copes with the local-convergence property of the optimizer and with the presence of numerous local minima in the cost function. Numerical results show that the proposed method is more stable with respect to the initial estimate of the radiative properties and provides more accurate final reconstructions than the ordinary BFGS algorithm, while requiring similar computation times. The results of this work are presented in this thesis in the form of four articles. The first article has been accepted in the International Journal of Thermal Sciences, the second in Inverse Problems in Science and Engineering, the third in the Journal of Computational and Applied Mathematics, and the fourth has been submitted to the Journal of Quantitative Spectroscopy & Radiative Transfer. Ten further articles have been published in peer-reviewed conference proceedings. These articles are available in PDF format on the website of the t3e research chair (www.t3e.info).
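
    Since the L-curve method is central to the second part of the thesis, here is a minimal, generic sketch of it (using the classical Hilbert-matrix test problem rather than the thesis's DOT model, and a crude finite-difference corner detector): the chosen parameter sits at the point of maximum curvature of the log residual-norm versus log solution-norm curve.

    ```python
    import numpy as np
    from scipy.linalg import hilbert

    n = 12
    A = hilbert(n)                              # classic ill-posed test matrix
    x_true = np.ones(n)
    rng = np.random.default_rng(5)
    b = A @ x_true + 1e-4 * rng.standard_normal(n)

    alphas = np.logspace(-8, 0, 60)
    res, sol = [], []
    for a in alphas:
        x = np.linalg.solve(A.T @ A + a**2 * np.eye(n), A.T @ b)
        res.append(np.log(np.linalg.norm(A @ x - b)))
        sol.append(np.log(np.linalg.norm(x)))
    res, sol = np.array(res), np.array(sol)

    # Corner = maximum curvature of the parametric curve (res(alpha), sol(alpha))
    d1r, d1s = np.gradient(res), np.gradient(sol)
    d2r, d2s = np.gradient(d1r), np.gradient(d1s)
    kappa = np.abs(d1r * d2s - d1s * d2r) / (d1r**2 + d1s**2 + 1e-12) ** 1.5
    inner = slice(3, -3)                        # ignore noisy endpoints
    best = alphas[inner][np.argmax(kappa[inner])]
    print("L-curve corner at alpha ~", best)
    ```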

  13. Classifying the Sizes of Explosive Eruptions using Tephra Deposits: The Advantages of a Numerical Inversion Approach

    NASA Astrophysics Data System (ADS)

    Connor, C.; Connor, L.; White, J.

    2015-12-01

    Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how much can we reduce their uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification are implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and is applied here using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and the uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 Cerro Negro (Nicaragua), 2011 Kirishima-Shinmoedake (Japan), and 1913 Colima (Mexico) eruptions. These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust models provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.

  14. Towards the mechanical characterization of abdominal wall by inverse analysis.

    PubMed

    Simón-Allué, R; Calvo, B; Oberai, A A; Barbone, P E

    2017-02-01

    The aim of this study is to characterize the passive mechanical behaviour of the abdominal wall in vivo in an animal model using only external cameras and numerical analysis. The main objective lies in defining a methodology that provides in vivo information for a specific patient without altering the mechanical properties. It is demonstrated in a mechanical study of the abdomen for hernia research purposes. The mechanical tests consisted of pneumoperitoneum tests performed on New Zealand rabbits, in which the inner pressure was varied from 0 mmHg to 12 mmHg. Changes in the external abdominal surface were recorded and several points were tracked. Based on their coordinates, we reconstructed a 3D finite element model of the abdominal wall, considering an incompressible hyperelastic material model defined by two parameters. The spatial distributions of these parameters (shear modulus and nonlinear parameter) were calculated by inverse analysis, using two different types of regularization: Total Variation Diminishing (TVD) and Tikhonov (H1). After solving the inverse problem, the distributions of the material parameters were obtained along the abdominal surface. The accuracy of the results was evaluated for the last level of pressure. Results revealed a higher value of the shear modulus in a wide stripe along the cranio-caudal direction, associated with the presence of the linea alba in conjunction with fascias and the rectus abdominis. The nonlinear parameter distribution was smoother, and the location of the higher values varied with the regularization type. Both regularizations proved to yield an accurate predicted displacement field, but H1 produced a smoother material parameter distribution while TVD included some discontinuities. The methodology presented here was able to characterize in vivo the passive nonlinear mechanical response of the abdominal wall.

  15. Computing distance distributions from dipolar evolution data with overtones: RIDME spectroscopy with Gd(iii)-based spin labels.

    PubMed

    Keller, Katharina; Mertens, Valerie; Qi, Mian; Nalepa, Anna I; Godt, Adelheid; Savitsky, Anton; Jeschke, Gunnar; Yulikov, Maxim

    2017-07-21

    Extraction of distance distributions between high-spin paramagnetic centers from relaxation induced dipolar modulation enhancement (RIDME) data is affected by the presence of overtones of dipolar frequencies. As previously proposed, we account for these overtones by using a modified kernel function in the Tikhonov regularization analysis. This paper analyzes the performance of such an approach on a series of model compounds with the Gd(iii)-PyMTA complex serving as a paramagnetic high-spin label. We describe the calibration of the overtone coefficients for the RIDME kernel, demonstrate the accuracy of distance distributions obtained with this approach, and show that for our series of Gd-rulers the RIDME technique provides more accurate distance distributions than Gd(iii)-Gd(iii) double electron-electron resonance (DEER). The analysis of RIDME data including harmonic overtones can be performed using the MATLAB-based program OvertoneAnalysis, which is available as open-source software from the web page of ETH Zurich. This approach opens a perspective for the routine use of the RIDME technique with high-spin labels in structural biology and in structural studies of other soft matter.

  16. Simplified Phase Diversity algorithm based on a first-order Taylor expansion.

    PubMed

    Zhang, Dong; Zhang, Xiaobin; Xu, Shuyan; Liu, Nannan; Zhao, Luoxin

    2016-10-01

    We present a simplified solution to phase diversity when the observed object is a point source. It utilizes an iterative linearization of the point spread function (PSF) at two or more diverse planes by first-order Taylor expansion to reconstruct the initial wavefront. To enhance the influence of the PSF in the defocused plane, which is usually very dim compared with that in the focal plane, we build a new model with a Tikhonov regularization function. The new model not only increases the computational speed but also reduces the influence of noise. Using PSFs obtained from Zemax, we reconstruct the wavefront of the Hubble Space Telescope (HST) at the edge of the field of view (FOV) when the telescope is in either the nominal state or a misaligned state. We also set up an experiment, consisting of an imaging system and a deformable mirror, to validate the correctness of the presented model. The results show that the new model improves the computational speed while maintaining high wavefront detection accuracy.

  17. The Detection of Radiated Modes from Ducted Fan Engines

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Nark, Douglas M.; Thomas, Russell H.

    2001-01-01

    The bypass duct of an aircraft engine is a low-pass filter allowing some spinning modes to radiate outside the duct. Knowledge of the radiated modes can help in noise reduction, as well as in the diagnosis of noise generation mechanisms inside the duct. We propose a nonintrusive technique using a circular microphone array outside the engine measuring the complex noise spectrum on an arc of a circle. The array is placed at various axial distances from the inlet or the exhaust of the engine. Using a model of noise radiation from the duct, an overdetermined system of linear equations is constructed for the complex amplitudes of the radial modes for a fixed circumferential mode. This system of linear equations is generally singular, indicating that the problem is ill-posed. Tikhonov regularization is employed to solve this system of equations for the unknown amplitudes of the radiated modes. An application of our mode detection technique using measured acoustic data from a circular microphone array is presented. We show that this technique can reliably detect radiated modes, with the possible exception of modes very close to cut-off.
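
    The numerical difficulty described here, a formally overdetermined but nearly singular system, is easy to reproduce. The sketch below uses an invented radiation-like matrix whose last columns (mimicking modes near cut-off that barely radiate) are almost invisible to the microphones; ordinary least squares amplifies noise into those amplitudes, while a small Tikhonov term damps them.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_mic, n_modes = 40, 6
    theta = np.linspace(0.1, 1.4, n_mic)            # microphone arc positions
    # Columns decay strongly with mode order, as for modes near cut-off
    E = np.array([np.cos((k + 1) * theta) * np.exp(-1.5 * k)
                  for k in range(n_modes)]).T
    amps_true = np.array([1.0, 0.5, 0.0, 0.2, 0.0, 0.05])
    p = E @ amps_true + 0.01 * rng.standard_normal(n_mic)

    print("condition number of E:", np.linalg.cond(E))
    amps_ls, *_ = np.linalg.lstsq(E, p, rcond=None)
    a = 1e-2                                        # Tikhonov parameter (hand-tuned)
    amps_tik = np.linalg.solve(E.T @ E + a**2 * np.eye(n_modes), E.T @ p)
    print("least-squares error:", np.linalg.norm(amps_ls - amps_true))
    print("Tikhonov error     :", np.linalg.norm(amps_tik - amps_true))
    ```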

  18. Numerical methods for the design of gradient-index optical coatings.

    PubMed

    Anzengruber, Stephan W; Klann, Esther; Ramlau, Ronny; Tonova, Diana

    2012-12-01

    We formulate the problem of designing gradient-index optical coatings as the task of solving a system of operator equations. We use iterative numerical procedures known from the theory of inverse problems to solve it with respect to the coating refractive index profile and thickness. The mathematical derivations necessary for the application of the procedures are presented, and different numerical methods (Landweber, Newton, and Gauss-Newton methods, Tikhonov minimization with surrogate functionals) are implemented. Procedures for the transformation of the gradient coating designs into quasi-gradient ones (i.e., multilayer stacks of homogeneous layers with different refractive indices) are also developed. The design algorithms work with physically available coating materials that could be produced with the modern coating technologies.

  19. The Jet Stream's Precursor of M7.7 Russia Earthquake on 2017/07/17

    NASA Astrophysics Data System (ADS)

    Wu, H. C.

    2017-12-01

    Before M>6.0 earthquakes occur, the jet stream above the epicentral area is interrupted or its velocity flow lines cross, meaning that the atmospheric pressure at high altitude drops suddenly within 6-12 hours before the earthquake (Wu & Tikhonov, 2014; Wu et al., 2015). The 70-knot speed line of the jet stream crossed at the epicenter on 2017/07/13, and the M7.7 Russia earthquake occurred on 2017/07/17. The Lithosphere-Atmosphere-Ionosphere (LAI) coupling model may explain this phenomenon: ionization of the air is produced by increased radon emanation at the epicenter; water molecules in the air react with these ions and release heat; the heat causes a temperature rise and a pressure drop in the air (Pulinets & Ouzounov, 2011), which in turn alters the speed lines of the jet stream. P.S. Russia earthquake: M7.7, 2017-07-17 23:34:13 (UTC), 54.471°N 168.815°E, depth 11.0 km. References: H. C. Wu and I. N. Tikhonov, 2014, "Jet streams anomalies as possible short-term precursors of earthquakes with M>6.0", Research in Geophysics. H. C. Wu, I. N. Tikhonov, and Ariel R. Ćesped, 2015, "Multi-parametric analysis of earthquake precursors", Russ. J. Earth. Sci., 15, ES3002, doi:10.2205/2015ES000553. S. Pulinets and D. Ouzounov, 2011, "Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model - An unified concept for earthquake precursors validation", Journal of Asian Earth Sciences, 41(4), 371-382.

  20. A practical deconvolution algorithm in multi-fiber spectra extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Haotong; Li, Guangwei; Bai, Zhongrui

    2015-08-01

    The deconvolution algorithm is a very promising method in multi-fiber spectroscopy data reduction: it can extract spectra down to the photon noise level as well as improve the spectral resolution but, as mentioned in Bolton & Schlegel (2010), it is limited by its huge computational requirements and thus cannot be implemented directly in actual data reduction. We develop a practical algorithm to solve the computation problem. The new algorithm can deconvolve a 2D fiber spectral image of any size with actual PSFs, which may vary with position. We further consider the influence of noise, which is an intrinsic ill-posedness issue in deconvolution algorithms, and modify our method with a Tikhonov regularization term to suppress the method-induced noise. A series of simulations based on LAMOST data are carried out to test our method in realistic situations with Poisson noise and extreme cross-talk, i.e., where the fiber-to-fiber distance is comparable to the FWHM of the fiber profile. Compared with the results of traditional extraction methods, i.e., the aperture extraction method and the profile fitting method, our method yields both higher S/N and higher spectral resolution. The computation time for a noise-added image with 250 fibers and 4k pixels in the wavelength direction is about 2 hours when the fiber cross-talk is not extreme, and 3.5 hours for extreme fiber cross-talk. We finally apply our method to real LAMOST data. We find that the 1D spectrum extracted by our method has both higher S/N and higher resolution than those from the traditional methods, but there are still some suspicious weak features, possibly caused by the noise sensitivity of the method, around the strong emission lines. How to further attenuate the noise influence will be the topic of our future work. As we have demonstrated, multi-fiber spectra extracted by our method will have higher resolution and signal-to-noise ratio and thus will provide more accurate information (such as higher radial velocity and metallicity measurement accuracy in stellar physics) to astronomers than traditional methods.
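
    A heavily reduced illustration of the idea (a single CCD column with overlapping 1D Gaussian fiber profiles, rather than the full 2D varying-PSF problem; the fiber count, profile width and regularization weight are invented): extraction becomes a Tikhonov-damped sparse least-squares solve.

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    rng = np.random.default_rng(7)
    npix, nfib = 200, 20
    centers = np.linspace(10.0, 190.0, nfib)      # fiber trace centers (pixels)
    pix = np.arange(npix)

    # Profile matrix: column j is fiber j's Gaussian profile on the CCD column;
    # the ~9.5 px spacing is comparable to the ~7 px FWHM, so profiles overlap
    P = np.exp(-0.5 * ((pix[:, None] - centers[None, :]) / 3.0) ** 2)
    P = sparse.csr_matrix(np.where(P > 1e-6, P, 0.0))

    flux_true = 100.0 + 50.0 * rng.random(nfib)
    image = rng.poisson(P @ flux_true).astype(float)   # photon noise

    alpha = 1e-2                                  # Tikhonov damping (hand-tuned)
    I = sparse.identity(nfib)
    flux = spsolve((P.T @ P + alpha * I).tocsc(), P.T @ image)
    print("max relative extraction error:",
          np.max(np.abs(flux - flux_true) / flux_true))
    ```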

  1. Performance Evaluation of MIND Demons Deformable Registration of MR and CT Images in Spinal Interventions

    PubMed Central

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Goerres, J.; Jacobson, M.; Ketcha, M.; Vogt, S.; Kleinszig, G.; Khanna, A. J.; Wolinsky, J.-P.; Prince, J. L.; Siewerdsen, J. H.

    2016-01-01

    Accurate intraoperative localization of target anatomy and adjacent nervous and vascular tissue is essential to safe, effective surgery, and multimodality deformable registration can be used to identify such anatomy by fusing preoperative CT or MR images with intraoperative images. A deformable image registration method has been developed to estimate viscoelastic diffeomorphisms between preoperative MR and intraoperative CT using modality-independent neighborhood descriptors (MIND) and a Huber metric for robust registration. The method, called MIND Demons, optimizes a constrained symmetric energy functional incorporating priors on smoothness, geodesics, and invertibility by alternating between Gauss-Newton optimization and Tikhonov regularization in a multiresolution scheme. Registration performance was evaluated for the MIND Demons method with a symmetric energy formulation in comparison to an asymmetric form, and sensitivity to anisotropic MR voxel-size was analyzed in phantom experiments emulating image-guided spine-surgery in comparison to a free-form deformation (FFD) method using local mutual information (LMI). Performance was validated in a clinical study involving 15 patients undergoing intervention of the cervical, thoracic, and lumbar spine. The target registration error (TRE) for the symmetric MIND Demons formulation [1.3 ± 0.8 mm (median ± interquartile)] outperformed the asymmetric form [3.6 ± 4.4 mm]. The method demonstrated fairly minor sensitivity to anisotropic MR voxel size, with median TRE ranging 1.3 – 2.9 mm for MR slice thickness ranging 0.9 – 9.9 mm, compared to TRE = 3.2 – 4.1 mm for LMI FFD over the same range. Evaluation in clinical data demonstrated sub-voxel TRE (< 2 mm) in all fifteen cases with realistic deformations that preserved topology with sub-voxel invertibility (0.001 mm) and positive-determinant spatial Jacobians. The approach therefore appears robust against realistic anisotropic resolution characteristics in MR and yields registration accuracy suitable to application in image-guided spine-surgery. PMID:27811396

  2. Performance evaluation of MIND demons deformable registration of MR and CT images in spinal interventions.

    PubMed

    Reaungamornrat, S; De Silva, T; Uneri, A; Goerres, J; Jacobson, M; Ketcha, M; Vogt, S; Kleinszig, G; Khanna, A J; Wolinsky, J-P; Prince, J L; Siewerdsen, J H

    2016-12-07

    Accurate intraoperative localization of target anatomy and adjacent nervous and vascular tissue is essential to safe, effective surgery, and multimodality deformable registration can be used to identify such anatomy by fusing preoperative CT or MR images with intraoperative images. A deformable image registration method has been developed to estimate viscoelastic diffeomorphisms between preoperative MR and intraoperative CT using modality-independent neighborhood descriptors (MIND) and a Huber metric for robust registration. The method, called MIND Demons, optimizes a constrained symmetric energy functional incorporating priors on smoothness, geodesics, and invertibility by alternating between Gauss-Newton optimization and Tikhonov regularization in a multiresolution scheme. Registration performance was evaluated for the MIND Demons method with a symmetric energy formulation in comparison to an asymmetric form, and sensitivity to anisotropic MR voxel-size was analyzed in phantom experiments emulating image-guided spine-surgery in comparison to a free-form deformation (FFD) method using local mutual information (LMI). Performance was validated in a clinical study involving 15 patients undergoing intervention of the cervical, thoracic, and lumbar spine. The target registration error (TRE) for the symmetric MIND Demons formulation (1.3  ±  0.8 mm (median  ±  interquartile)) outperformed the asymmetric form (3.6  ±  4.4 mm). The method demonstrated fairly minor sensitivity to anisotropic MR voxel size, with median TRE ranging 1.3-2.9 mm for MR slice thickness ranging 0.9-9.9 mm, compared to TRE  =  3.2-4.1 mm for LMI FFD over the same range. Evaluation in clinical data demonstrated sub-voxel TRE (<2 mm) in all fifteen cases with realistic deformations that preserved topology with sub-voxel invertibility (0.001 mm) and positive-determinant spatial Jacobians. The approach therefore appears robust against realistic anisotropic resolution characteristics in MR and yields registration accuracy suitable to application in image-guided spine-surgery.

  3. Performance evaluation of MIND demons deformable registration of MR and CT images in spinal interventions

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Goerres, J.; Jacobson, M.; Ketcha, M.; Vogt, S.; Kleinszig, G.; Khanna, A. J.; Wolinsky, J.-P.; Prince, J. L.; Siewerdsen, J. H.

    2016-12-01

    Accurate intraoperative localization of target anatomy and adjacent nervous and vascular tissue is essential to safe, effective surgery, and multimodality deformable registration can be used to identify such anatomy by fusing preoperative CT or MR images with intraoperative images. A deformable image registration method has been developed to estimate viscoelastic diffeomorphisms between preoperative MR and intraoperative CT using modality-independent neighborhood descriptors (MIND) and a Huber metric for robust registration. The method, called MIND Demons, optimizes a constrained symmetric energy functional incorporating priors on smoothness, geodesics, and invertibility by alternating between Gauss-Newton optimization and Tikhonov regularization in a multiresolution scheme. Registration performance was evaluated for the MIND Demons method with a symmetric energy formulation in comparison to an asymmetric form, and sensitivity to anisotropic MR voxel-size was analyzed in phantom experiments emulating image-guided spine-surgery in comparison to a free-form deformation (FFD) method using local mutual information (LMI). Performance was validated in a clinical study involving 15 patients undergoing intervention of the cervical, thoracic, and lumbar spine. The target registration error (TRE) for the symmetric MIND Demons formulation (1.3  ±  0.8 mm (median  ±  interquartile)) outperformed the asymmetric form (3.6  ±  4.4 mm). The method demonstrated fairly minor sensitivity to anisotropic MR voxel size, with median TRE ranging 1.3-2.9 mm for MR slice thickness ranging 0.9-9.9 mm, compared to TRE  =  3.2-4.1 mm for LMI FFD over the same range. Evaluation in clinical data demonstrated sub-voxel TRE (<2 mm) in all fifteen cases with realistic deformations that preserved topology with sub-voxel invertibility (0.001 mm) and positive-determinant spatial Jacobians. The approach therefore appears robust against realistic anisotropic resolution characteristics in MR and yields registration accuracy suitable to application in image-guided spine-surgery.

  4. LSRN: A Parallel Iterative Solver for Strongly Over- or Underdetermined Systems

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

    We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝⁿ} ‖Ax − b‖₂, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
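
    A single-threaded toy rendition of the preconditioning idea for the overdetermined case (no parallelism, no Chebyshev method; the sizes, conditioning and γ = 2 are illustrative): sketch GA with a Gaussian G, take its SVD, and use N = VΣ⁻¹ as a right preconditioner so that LSQR converges in a small, predictable number of iterations.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(8)
    m, n = 5000, 50                                  # strongly overdetermined
    A = rng.standard_normal((m, n)) * np.logspace(0, 6, n)  # ill-conditioned columns
    b = rng.standard_normal(m)

    gamma = 2.0
    s = int(gamma * n)
    G = rng.standard_normal((s, m))                  # random normal projection
    U, sig, Vt = np.linalg.svd(G @ A, full_matrices=False)
    N = Vt.T / sig                                   # right preconditioner V Sigma^-1

    result = lsqr(A @ N, b)                          # well-conditioned LS problem
    x = N @ result[0]                                # map back to original variables
    print("LSQR iterations:", result[2])
    print("residual norm:", np.linalg.norm(A @ x - b))
    ```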

  5. Advances in Global Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Bozdag, E.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Modrak, R. T.; Orsvuran, R.; Smith, J. A.; Komatitsch, D.; Peter, D. B.

    2017-12-01

    Information about Earth's interior comes from seismograms recorded at its surface. Seismic imaging based on spectral-element and adjoint methods has enabled assimilation of this information for the construction of 3D (an)elastic Earth models. These methods account for the physics of wave excitation and propagation by numerically solving the equations of motion, and require the execution of complex computational procedures that challenge the most advanced high-performance computing systems. Current research is petascale; future research will require exascale capabilities. The inverse problem consists of reconstructing the characteristics of the medium from (often noisy) observations. A nonlinear functional is minimized, which involves both the misfit to the measurements and a Tikhonov-type regularization term to tackle inherent ill-posedness. Achieving scalability for the inversion process on tens of thousands of multicore processors is a task that offers many research challenges. We initiated global "adjoint tomography" using 253 earthquakes and produced the first-generation model named GLAD-M15, with a transversely isotropic model parameterization. We are currently running iterations for a second-generation anisotropic model based on the same 253 events. In parallel, we continue iterations for a transversely isotropic model with a larger dataset of 1,040 events to determine higher-resolution plume and slab images. A significant part of our research has focused on eliminating I/O bottlenecks in the adjoint tomography workflow. This has led to the development of a new Adaptable Seismic Data Format based on HDF5, and post-processing tools based on the ADIOS library developed by Oak Ridge National Laboratory. We use the Ensemble Toolkit for workflow stabilization and management to automate the workflow with minimal human interaction.

  6. Natural recharge estimation and uncertainty analysis of an adjudicated groundwater basin using a regional-scale flow and subsidence model (Antelope Valley, California, USA)

    USGS Publications Warehouse

    Siade, Adam J.; Nishikawa, Tracy; Martin, Peter

    2015-01-01

    Groundwater has provided 50–90 % of the total water supply in Antelope Valley, California (USA). The associated groundwater-level declines have led the Los Angeles County Superior Court of California to recently rule that the Antelope Valley groundwater basin is in overdraft, i.e., annual pumpage exceeds annual recharge. Natural recharge consists primarily of mountain-front recharge and is an important component of the total groundwater budget in Antelope Valley. Therefore, natural recharge plays a major role in the Court’s decision. The exact quantity and distribution of natural recharge is uncertain, with total estimates from previous studies ranging from 37 to 200 gigaliters per year (GL/year). In order to better understand the uncertainty associated with natural recharge and to provide a tool for groundwater management, a numerical model of groundwater flow and land subsidence was developed. The transient model was calibrated using PEST with water-level and subsidence data; prior information was incorporated through the use of Tikhonov regularization. The calibrated estimate of natural recharge was 36 GL/year, which is appreciably less than the value used by the court (74 GL/year). The effect of parameter uncertainty on the estimation of natural recharge was addressed using the Null-Space Monte Carlo method. A Pareto trade-off method was also used to portray the reasonableness of larger natural recharge rates. The reasonableness of the 74 GL/year value and the effect of uncertain pumpage rates were also evaluated. The uncertainty analyses indicate that the total natural recharge likely ranges between 34.5 and 54.3 GL/year.

  7. A Generic 1D Forward Modeling and Inversion Algorithm for TEM Sounding with an Arbitrary Horizontal Loop

    NASA Astrophysics Data System (ADS)

    Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao

    2016-08-01

    We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of the data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with Tikhonov regularization. We propose a new data-weighting method to minimize the initial-model dependence, which enhances the convergence stability. On a laptop with an i7-5700HQ CPU at 3.5 GHz, one inversion iteration for a 200-layer input model with a single receiver takes only 0.34 s, increasing to only 0.53 s for data from four receivers at the same depth. For the case of four receivers at different depths, the runtime of an inversion iteration increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of the loop geometry is significant at early times and vanishes gradually as the TEM field diffuses. For a stratified earth, inversion of data from more than one receiver is useful for noise reduction, yielding a more credible layered-earth model. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not significantly improve recovery of the resistive layer. Even with a down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above it. However, our modeling demonstrates a remarkable improvement in detecting the resistive layer with receivers in or under this layer.
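
    The Gauss-Newton/Tikhonov combination used here has a compact generic form: at each iterate, solve (JᵀJ + αI)δ = Jᵀr for the model update. The sketch below applies it to an invented two-parameter exponential-decay forward model, not a layered-earth TEM response.

    ```python
    import numpy as np

    def forward(p, t):
        return p[0] * np.exp(-p[1] * t)

    def jacobian(p, t):
        return np.column_stack([np.exp(-p[1] * t),
                                -p[0] * t * np.exp(-p[1] * t)])

    rng = np.random.default_rng(9)
    t = np.linspace(0.01, 2.0, 50)
    p_true = np.array([2.0, 3.0])
    d = forward(p_true, t) * (1 + 0.02 * rng.standard_normal(t.size))

    p = np.array([1.0, 1.0])                 # initial model
    alpha = 1e-3                             # Tikhonov damping of the update
    for it in range(20):
        r = d - forward(p, t)
        J = jacobian(p, t)
        dp = np.linalg.solve(J.T @ J + alpha * np.eye(2), J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < 1e-8:
            break
    print("recovered parameters:", p)
    ```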

  8. Reconstruction of past equilibrium line altitude using ice extent data

    NASA Astrophysics Data System (ADS)

    Visnjevic, Vjeran; Herman, Frederic; Podladchikov, Yuri

    2017-04-01

    The end of the Last Glacial Maximum (LGM), about 20,000 years ago, marked the end of the most recent long-lasting cold phase in Earth's history. This last glacial advance left a strong observable imprint on the landscape, such as abandoned moraines, trimlines and other glacial geomorphic features. These features provide a valuable record of past continental climate. In particular, terminal moraines reflect the extent of glaciers and ice caps, which itself reflects past temperature and precipitation conditions. Here we present an inverse approach, based on Tikhonov regularization, that we have recently developed to reconstruct the LGM mass balance from observed ice extent data. The ice flow model is developed using the shallow ice approximation and solved explicitly using Graphics Processing Units (GPUs). The mass balance field b is the constrained variable, defined in terms of the ice surface S, the balance rate β and the spatially variable equilibrium line altitude field ELA(x,y) as b = min(β · (S(x,y) − ELA(x,y)), c), (1) where c is a maximum accumulation rate. We show that such a mass balance, and thus the spatially variable ELA field, can be inferred from the observed past ice extent and ice thickness at high resolution and very efficiently. The GPU implementation allows us to solve one forward model run on a 1024×1024 grid in under 0.5 s, which significantly reduces the time needed for our inverse method to converge. We start with synthetic tests to demonstrate the method. We then apply the method to the LGM ice extents of the South Island of New Zealand, the Patagonian Andes, where we can see a clear influence of the Westerlies on the ELA, and the European Alps. These examples show that the method is capable of constraining spatial variations in mass balance at the scale of a mountain range, and provide us with information on past continental climate.

  9. Transect-scale imaging of root zone electrical conductivity by inversion of multiple-height EMI measurements under different salinity conditions

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Coppola, Antonio; Dragonetti, Giovanna; Comegna, Alessandro; Rodriguez, Giuseppe; Vignoli, Giulio

    2017-04-01

    The ability to determine the effects of salts on soils and plants is of great importance to agriculture. To control its harmful effects, soil salinity needs to be monitored in space and time. This requires knowledge of its magnitude, temporal dynamics, and spatial variability. Soil salinity can be evaluated by measuring the bulk electrical conductivity (σb) in the field. Measurements of σb can be made with either in situ or remote devices (Rhoades and Oster, 1986; Rhoades and Corwin, 1990; Rhoades and Miyamoto, 1990). Time Domain Reflectometry (TDR) sensors allow simultaneous measurements of water content, θ, and σb. They may be calibrated in the laboratory for estimating the electrical conductivity of the soil solution (σw). However, they have a relatively small observation volume and thus only provide local-scale measurements. The spatial range of the sensors is limited to tens of centimeters, and extension of the information to a large area can be problematic. Also, information on the vertical distribution of the σb soil profile may only be obtained by installing sensors at different depths. In this sense, TDR may be considered an invasive technique. Compared to TDR, non-invasive electromagnetic induction (EMI) techniques can be used for extensively mapping the bulk electrical conductivity in the field. The problem is that all these techniques give depth-weighted apparent electrical conductivity (ECa) measurements, depending on the specific depth distribution of σb as well as on the depth response function of the sensor used. In order to deduce the actual distribution of local σb in the soil profile, one may invert the signal coming from EMI sensors. Most studies use the linear model proposed by McNeill (1980), describing the relative depth response of the ground conductivity meter. Using the forward linear model of McNeill, Borchers et al. (1997) implemented a least-squares inverse procedure with second-order Tikhonov regularization to estimate the vertical σb distribution from EMI field data (see the sketch after this record). More recent studies (Hendrickx et al., 2002; Deidda et al., 2003; Deidda et al., 2014, among others) extended the approach to a more complicated nonlinear model of the response of a ground conductivity meter to changes of σb with depth. Notably, these inverse procedures are based only on electromagnetic physics. Thus, they require only ECa readings, possibly taken with both the horizontal and vertical configurations and with the sensor at different heights above the ground, and do not require any further field calibration. Nevertheless, as discussed by Hendrickx et al. (2002), important issues with inverse approaches concern: (i) the applicability to heterogeneous field soils of physical equations originally developed for the electromagnetic response of homogeneous media, and (ii) the nonuniqueness and instability problems inherent to inverse procedures, even after Tikhonov regularization. Besides, as discussed by Cook and Walker (1992), these mathematical inversion procedures using layered-earth models were originally designed for interpreting porous systems with distinct layering. Where subsurface layers are not sharply defined, this type of inversion may be subject to considerable error. With these premises, the main aim of this study is to estimate the vertical σb distribution from ECa measured using ground-surface EMI methods under different salinity conditions, using TDR data as ground truth for validation of the inversion procedure. The latter is based on a regularized 1D inversion procedure designed to swiftly manage nonlinear multiple EMI-depth responses (Deidda et al., 2014). It is based on the coupling of the damped Gauss-Newton method with either the truncated singular value decomposition (TSVD) or the truncated generalized singular value decomposition (TGSVD), and it implements an explicit (exact) representation of the Jacobian to solve the nonlinear inverse problem. The experimental field (30 m × 15.6 m, for a total area of 468 m²) was divided into three transects, each 30 m long and 4.2 m wide, cultivated with green bean and irrigated with three different salinity levels (1 dS/m, 3 dS/m, and 6 dS/m). Each transect consisted of seven rows equipped with a sprinkler irrigation system, which supplied a water flux of 2 l/h. For the salt application, CaCl2 was dissolved in tap water and subsequently siphoned into the irrigation system. For each transect, 24 regularly spaced monitoring sites (1 m apart) were selected for soil measurements, made with different instruments, including (i) a TDR100 and (ii) a Geonics EM-38. Overall, fifteen measurement campaigns were carried out.
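
    A hedged sketch of the McNeill-type linear inversion cited in this record (in the spirit of Borchers et al., 1997, but with invented layering, instrument heights, noise level and regularization weight): a Tikhonov-regularized least-squares fit of a layered σb profile to vertical-dipole ECa readings taken at several heights.

    ```python
    import numpy as np

    def RV(z):   # cumulative vertical-dipole depth response (McNeill, 1980)
        return 1.0 / np.sqrt(4.0 * z**2 + 1.0)

    s = 1.0                                          # intercoil spacing (m)
    heights = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # instrument heights (m)
    z_edges = np.linspace(0.0, 2.0, 21)              # layer boundaries (m)

    # Sensitivity of each layer to each reading, from the cumulative response
    A = np.array([[RV((z_edges[j] + h) / s) - RV((z_edges[j + 1] + h) / s)
                   for j in range(len(z_edges) - 1)] for h in heights])

    rng = np.random.default_rng(11)
    z_mid = 0.5 * (z_edges[:-1] + z_edges[1:])
    sigma_true = 0.05 + 0.1 * np.exp(-((z_mid - 0.6) / 0.3) ** 2)  # S/m
    eca = A @ sigma_true + 0.001 * rng.standard_normal(len(heights))

    # Second-order Tikhonov regularization, as in Borchers et al. (1997)
    L = np.diff(np.eye(len(z_mid)), n=2, axis=0)
    alpha = 1e-3
    sigma_rec = np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ eca)
    print("depth of recovered conductivity peak (m):", z_mid[np.argmax(sigma_rec)])
    ```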

  10. Probabilistic Downscaling of Remote Sensing Data with Applications for Multi-Scale Biogeochemical Flux Modeling.

    PubMed

    Stoy, Paul C; Quaife, Tristan

    2015-01-01

    Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca 2% from those created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
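
    A compact sketch of a two-dimensional Tikhonov fill-in of this general flavor (not the authors' 2DTR code; the grid sizes, averaging constraint and weight γ are invented): fine-grid values are sought that reproduce coarse-cell means while a Kronecker-built 2D difference operator supplies the smoothness.

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    rng = np.random.default_rng(10)
    nf, nc = 32, 8                      # fine and coarse grid sizes
    block = nf // nc

    # Block-averaging operator: coarse-cell means of the flattened fine grid
    Avg1 = np.kron(np.eye(nc), np.ones(block) / block)   # (nc, nf), 1D averaging
    A = sparse.csr_matrix(np.kron(Avg1, Avg1))           # (nc*nc, nf*nf), 2D

    coarse = rng.random((nc, nc)).ravel()                # "observed" coarse NDVI

    # 2D smoothness operator from 1D first differences via Kronecker products
    D = sparse.diags([-1.0, 1.0], [0, 1], shape=(nf - 1, nf))
    I = sparse.identity(nf)
    L = sparse.vstack([sparse.kron(D, I), sparse.kron(I, D)])

    gamma = 0.1                                          # smoothness weight
    fine = spsolve((A.T @ A + gamma * L.T @ L).tocsc(), A.T @ coarse)
    print("max misfit to coarse-cell means:", np.abs(A @ fine - coarse).max())
    ```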

  11. Probabilistic Downscaling of Remote Sensing Data with Applications for Multi-Scale Biogeochemical Flux Modeling

    PubMed Central

    Stoy, Paul C.; Quaife, Tristan

    2015-01-01

    Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca 2% from those created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes. PMID:26067835

  12. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
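
    A one-dimensional toy instance of a variational Gaussian approximation for Poisson data (log link, Gaussian prior; the single observation, prior variance and step sizes are invented, and plain gradient ascent replaces the paper's alternating-direction scheme): q = N(m, v) is fitted by maximizing the evidence lower bound.

    ```python
    import numpy as np

    y, tau = 7.0, 4.0        # one count observation; prior variance

    def elbo_grad(m, v):
        # Gradients of y*m - E_q[exp(x)] - KL(N(m, v) || N(0, tau)),
        # using E_q[exp(x)] = exp(m + v/2) for x ~ N(m, v)
        e = np.exp(m + v / 2.0)
        dm = y - e - m / tau
        dv = -0.5 * e - 0.5 / tau + 0.5 / v
        return dm, dv

    m, v = 0.0, 1.0
    for _ in range(5000):
        dm, dv = elbo_grad(m, v)
        m += 0.01 * dm
        v = max(v + 0.01 * dv, 1e-6)   # keep the variance positive
    print("posterior approximation: mean", m, "variance", v)
    ```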

  13. A numerical investigation into the ability of the Poisson PDE to extract the mass-density from land-based gravity data: A case study of salt diapirs in the north coast of the Persian Gulf

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Yahya; Safari, Abdolreza

    2017-08-01

    This paper presents a numerical investigation into the capability of Poisson's partial differential equation (PDE) at the Earth's surface to extract the near-surface mass density from land-based gravity data. For this purpose, it first focuses on approximating the gradient tensor of the Earth's gravitational potential by means of land-based gravity data. Then, based on the concepts of both the gradient tensor and Poisson's PDE at the Earth's surface, certain formulae are proposed for the mass-density determination. Furthermore, this paper shows how the generalized Tikhonov regularization strategy can be used to enhance the efficiency of the proposed approach. Finally, in a real case study, the formulae are applied to 6350 gravity stations located within a part of the north coast of the Persian Gulf. The case study numerically indicates that the proposed formulae, provided by Poisson's PDE, have the ability to convert land-based gravity data into the terrain mass density, which has been used for depicting areas of salt diapirs in the region of the case study.

  14. MoisturEC: a new R program for moisture content estimation from electrical conductivity data

    USGS Publications Warehouse

    Terry, Neil; Day-Lewis, Frederick D.; Werkema, Dale D.; Lane, John W.

    2018-01-01

    Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data‐analysis tools are needed to “translate” geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user‐friendly tools are required to fully capitalize on the potential of geophysical information for soil‐moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two‐ and three‐dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach.
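
    MoisturEC itself is an R program; as a language-neutral illustration of the Archie-based translation step it performs, here is a small Python sketch (the cementation and saturation exponents, porosity and pore-water conductivity below are generic literature-style values, not MoisturEC defaults).

    ```python
    import numpy as np

    def moisture_from_ec(sigma_bulk, sigma_water, phi, m=1.3, n_exp=2.0):
        """Invert Archie's law for moisture content (no surface conduction).

        Archie: sigma_bulk = sigma_water * phi**m * S**n, with S = theta / phi,
        so theta = phi * (sigma_bulk / (sigma_water * phi**m)) ** (1 / n).
        """
        S = (sigma_bulk / (sigma_water * phi**m)) ** (1.0 / n_exp)
        return phi * np.clip(S, 0.0, 1.0)    # saturation capped at 1

    ec = np.array([0.005, 0.012, 0.020])     # bulk EC in S/m (hypothetical)
    print(moisture_from_ec(ec, sigma_water=0.05, phi=0.35))
    ```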

  15. Progress towards daily "swath" solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.; Sakumura, C.

    2015-12-01

Over the past 13 years, the GRACE mission has provided invaluable and unique measurements of the total water column in the Earth system. The GRACE solutions available from the project have been monthly average solutions. There have been attempts by several groups to produce shorter time-window solutions with different techniques. There is also an experimental quick-look GRACE solution available from CSR that implements a sliding window approach while applying variable daily data weights. All of these GRACE solutions require special handling for data assimilation. This study explores the possibility of generating a true daily GRACE solution by computing a daily "swath" total water storage (TWS) estimate from GRACE using the Tikhonov regularization and high-resolution monthly mascon estimation implemented at CSR. This paper discusses the techniques for computing such a solution and the characterization of its error and uncertainty. We perform comparisons with official RL05 GRACE solutions and with alternate mascon solutions from CSR to understand the impact on the science results. We evaluate these solutions with emphasis on the temporal characteristics of the signal content and validate them against multiple models and in-situ data sets.

  16. Evaluating the potential for quantitative monitoring of in situ chemical oxidation of aqueous-phase TCE using in-phase and quadrature electrical conductivity

    NASA Astrophysics Data System (ADS)

    Hort, R. D.; Revil, A.; Munakata-Marr, J.; Mao, D.

    2015-07-01

    Electrical resistivity measurements can potentially be used to remotely monitor fate and transport of ionic oxidants such as permanganate (MnO4-) during in situ chemical oxidation (ISCO) of contaminants like trichloroethene (TCE). Time-lapse two-dimensional bulk conductivity and induced polarization surveys conducted during a sand tank ISCO simulation demonstrated that MnO4- plume movement could be monitored in a qualitative manner using bulk conductivity tomograms, although chargeability was below sensitivity limits. We also examined changes to in-phase and quadrature electrical conductivity resulting from ion injection, MnO2 and Cl- production, and pH change during TCE and humate oxidation by MnO4- in homogeneous aqueous solutions and saturated porous media samples. Data from the homogeneous samples demonstrated that inversion of the sand tank resistivity data using a common Tikhonov regularization approach was insufficient to recover an accurate conductivity distribution within the tank. While changes to in-phase conductivity could be successfully modeled, quadrature conductivity values could not be directly related to TCE oxidation product or MnO4- concentrations at frequencies consistent with field induced polarization surveys, limiting the utility of quadrature conductivity for monitoring ISCO.

  17. Regional Recovery of the Disturbing Gravitational Potential from Satellite Observations of First-, Second- and Third-order Radial Derivatives of the Disturbing Gravitational Potential

    NASA Astrophysics Data System (ADS)

    Novak, P.; Pitonak, M.; Sprlak, M.

    2015-12-01

Recently realized gravity-dedicated satellite missions allow for measuring values of scalar, vectorial (Gravity Recovery And Climate Experiment - GRACE) and second-order tensorial (Gravity field and steady-state Ocean Circulation Explorer - GOCE) parameters of the Earth's gravitational potential. Theoretical aspects related to using moving sensors for measuring elements of a third-order gravitational tensor are currently under investigation, e.g. the gravity-dedicated satellite mission OPTIMA (OPTical Interferometry for global Mass change detection from space) should measure third-order derivatives of the Earth's gravitational potential. This contribution investigates regional recovery of the disturbing gravitational potential on the Earth's surface from satellite observations of first-, second- and third-order radial derivatives of the disturbing gravitational potential. Synthetic measurements along a satellite orbit at the altitude of 250 km are synthesized from the global gravitational model EGM2008 and polluted by Gaussian noise. The process of downward continuation is stabilized by Tikhonov regularization. Estimated values of the disturbing gravitational potential are compared with the same quantity synthesized directly from EGM2008. Finally, this contribution also discusses merging a regional solution into a global field as a patchwork.

  18. Possibilities of the regional gravity field recovery from first-, second- and third-order radial derivatives of the disturbing gravitational potential measured on moving platforms

    NASA Astrophysics Data System (ADS)

    Pitonak, Martin; Sprlak, Michal; Novak, Pavel; Tenzer, Robert

    2016-04-01

Recently realized gravity-dedicated satellite missions allow for measuring values of scalar, vectorial (Gravity Recovery And Climate Experiment - GRACE) and second-order tensorial (Gravity field and steady-state Ocean Circulation Explorer - GOCE) parameters of the Earth's gravitational potential. Theoretical aspects related to using moving sensors for measuring elements of the third-order gravitational tensor are currently under investigation, e.g., the gravity field-dedicated satellite mission OPTIMA (OPTical Interferometry for global Mass change detection from space) should measure third-order derivatives of the Earth's gravitational potential. This contribution investigates regional recovery of the disturbing gravitational potential on the Earth's surface from satellite and aerial observations of the first-, second- and third-order radial derivatives of the disturbing gravitational potential. Synthetic measurements along a satellite orbit at the altitude of 250 km and along an aircraft track at the altitude of 10 km are synthesized from the global gravitational model EGM2008 and polluted by Gaussian noise. The process of downward continuation is stabilized by Tikhonov regularization. Estimated values of the disturbing gravitational potential are compared with the same quantity synthesized directly from EGM2008.

  19. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since the inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
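
    For fixed R and B, the quadratic objective above has the familiar closed-form minimizer; the variational Bayes iteration in the abstract additionally re-estimates R and B, which this sketch omits. Sizes and values are illustrative.

```python
import numpy as np

# Minimizer of (y - M x)^T R^{-1} (y - M x) + x^T B^{-1} x for fixed R, B.
def regularized_source(M, y, R, B):
    Ri = np.linalg.inv(R)
    return np.linalg.solve(M.T @ Ri @ M + np.linalg.inv(B), M.T @ Ri @ y)

# Tikhonov regularization is the special case B = (1/alpha) * I:
rng = np.random.default_rng(1)
M = rng.normal(size=(30, 12))
y = M @ np.abs(rng.normal(size=12)) + 0.1 * rng.normal(size=30)
x_hat = regularized_source(M, y, R=np.eye(30), B=np.eye(12) / 0.1)
```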

  20. Ammonia concentration distribution measurements in the exhaust of a heavy duty diesel engine based on limited data absorption tomography.

    PubMed

    Stritzke, Felix; van der Kley, Sani; Feiling, Alexander; Dreizler, Andreas; Wagner, Steven

    2017-04-03

A multichannel tunable diode laser absorption spectrometer is used to measure absolute ammonia concentrations and their distributions in exhaust gas applications with intense CO2 and H2O background. Designed for in situ diagnostics in SCR aftertreatment systems with temperatures up to 800 K, the system employs a fiber-coupled near-infrared distributed feedback diode laser. With the laser split into eight coplanar beams crossing the exhaust pipe, the sensor provides eight concentration measurements simultaneously. Three ammonia ro-vibrational transitions coinciding near 2200.5 nm with rather weak temperature dependency and negligible CO2/H2O interference were probed during the measurements. The line-of-sight averaged channel concentrations are transformed into 2-D ammonia distributions using limited data IR species tomography based on Tikhonov regularization. This spectrometer was successfully applied in the exhaust system of a 340 kW heavy duty diesel engine operated without oxidation catalyst or particulate filter. In this harsh environment the multi-channel sensor achieved single path ammonia detection limits of 25 to 80 ppmV with a temporal resolution of 1 Hz, while, when operated as a single-channel sensor, these characteristics improved to 10 ppmV and 100 Hz. Spatial averaging of the reconstructed 2-D ammonia distributions shows good agreement with cross-sectional extractive measurements. In contrast to extractive methods, more information about spatial inhomogeneities and transient operating conditions can be derived from the new spectrometer.
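
    The Tikhonov-regularized limited-data tomography step can be sketched as a single stacked least-squares solve (a schematic with an arbitrary geometry, not the spectrometer's eight-beam layout):

```python
import numpy as np

# Solve min ||A x - b||^2 + lam^2 ||L x||^2 as one stacked lstsq, where
# each row of A integrates the unknown 2-D field along one beam path.
def tikhonov_tomography(A, b, L, lam):
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

n = 25                                   # 5x5 pixel grid (toy)
A = np.random.rand(8, n)                 # 8 line-of-sight channels
b = A @ np.ones(n)                       # synthetic path integrals
L = np.eye(n)                            # zeroth-order regularization
x = tikhonov_tomography(A, b, L, lam=0.1)
```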

  1. Calibration of hydrological model with programme PEST

    NASA Astrophysics Data System (ADS)

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

PEST is a tool based on the minimization of an objective function related to the root mean square error between the model output and the measurements. We use the "singular value decomposition" (SVD) section of the PEST control file together with the Tikhonov regularization method to estimate model parameters successfully. PEST can fail when inverse problems are ill-posed, but SVD ensures that it maintains numerical stability. The choice of the initial parameter values is an important issue in PEST and requires expert knowledge. Thanks to the flexible nature of the PEST software and its ability to be applied to whole catchments at once, the calibration performed extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology that simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and results files are in XML form, which makes it easy to connect the model with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are temporally processed on an hourly basis.
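
    The stabilizing role of SVD can be seen in a minimal sketch (schematic, not PEST itself): truncating small singular values removes the parameter-space directions that the data cannot constrain, which is what keeps ill-posed calibration updates finite.

```python
import numpy as np

# Truncated-SVD solve of J dp = r: drop singular values below a cutoff.
def tsvd_solve(J, r, rel_cutoff=1e-6):
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > rel_cutoff * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])

J = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-12]])  # nearly rank-deficient
dp = tsvd_solve(J, np.array([2.0, 2.0]))        # stable parameter update
```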

  2. Theoretical stability in coefficient inverse problems for general hyperbolic equations with numerical reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Liu, Yikan; Yamamoto, Masahiro

    2018-04-01

    In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.

  3. Is extreme learning machine feasible? A theoretical assessment (part I).

    PubMed

    Liu, Xia; Lin, Shaobo; Fang, Jian; Xu, Zongben

    2015-01-01

An extreme learning machine (ELM) is a feedforward neural network (FNN)-like learning system whose connections with output neurons are adjustable, while the connections with and within hidden neurons are randomly fixed. Numerous applications have demonstrated the feasibility and high efficiency of ELM-like systems. It has, however, remained open whether this is true for general applications. In this two-part paper, we conduct a comprehensive feasibility analysis of ELM. In Part I, we provide an answer to the question by theoretically justifying the following: 1) for some suitable activation functions, such as polynomials, Nadaraya-Watson and sigmoid functions, the ELM-like systems can attain the theoretical generalization bound of the FNNs with all connections adjusted, i.e., they do not degrade the generalization capability of the FNNs even when the connections with and within hidden neurons are randomly fixed; 2) the number of hidden neurons needed for an ELM-like system to achieve the theoretical bound can be estimated; and 3) whenever the activation function is taken as a polynomial, the deduced hidden layer output matrix is of full column rank, so the generalized inverse technique can be efficiently applied to yield the solution of an ELM-like system, and, furthermore, for the nonpolynomial case, Tikhonov regularization can be applied to guarantee weak regularity while not sacrificing the generalization capability. In Part II, however, we reveal a different aspect of the feasibility of ELM: there also exist some activation functions which make the corresponding ELM degrade the generalization capability. The obtained results underlie the feasibility and efficiency of ELM-like systems, and yield various generalizations and improvements of the systems as well.
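
    Point 3) above amounts to a ridge (Tikhonov) solve for the output weights once the hidden layer is randomly fixed. A minimal sketch, with arbitrary shapes and ridge parameter:

```python
import numpy as np

# Schematic ELM: random fixed hidden layer, Tikhonov-regularized readout.
rng = np.random.default_rng(2)
X, T = rng.normal(size=(200, 5)), rng.normal(size=(200, 1))
W, b = rng.normal(size=(5, 50)), rng.normal(size=50)   # random, then fixed
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                 # sigmoid hidden layer
lam = 1e-3                                             # ridge parameter
beta = np.linalg.solve(H.T @ H + lam * np.eye(50), H.T @ T)
T_pred = H @ beta
```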

  4. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
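
    The Tikhonov-regularized matrix inverse named above can be sketched in one dimension for a convolutional forward model, with the inversion done in the Fourier domain (the kernel below is a placeholder, not the 3-D dipole kernel of T2*MRI):

```python
import numpy as np

# Tikhonov-regularized inverse filtering of field = kernel * chi.
def tikhonov_deconvolve(field, kernel, lam=1e-2):
    F, K = np.fft.fft(field), np.fft.fft(kernel, n=field.size)
    return np.real(np.fft.ifft(np.conj(K) * F / (np.abs(K)**2 + lam)))

chi = np.zeros(128); chi[60:68] = 1.0          # toy susceptibility source
kernel = np.exp(-0.5 * (np.arange(128) - 64)**2 / 9.0)
field = np.real(np.fft.ifft(np.fft.fft(chi) * np.fft.fft(kernel)))
chi_rec = tikhonov_deconvolve(field, kernel)   # regularized reconstruction
```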

  5. Micron-size hydrogen cluster target for laser-driven proton acceleration

    NASA Astrophysics Data System (ADS)

    Jinno, S.; Kanasaki, M.; Uno, M.; Matsui, R.; Uesaka, M.; Kishimoto, Y.; Fukuda, Y.

    2018-04-01

As a new laser-driven ion acceleration technique, we proposed a way to produce impurity-free, highly reproducible, and robust proton beams exceeding 100 MeV using a Coulomb explosion of micron-size hydrogen clusters. In this study, micron-size hydrogen clusters were generated by expanding the cooled high-pressure hydrogen gas into a vacuum via a conical nozzle connected to a solenoid valve cooled by a mechanical cryostat. The size distributions of the hydrogen clusters were evaluated by measuring the angular distribution of laser light scattered from the clusters. The data were analyzed mathematically based on the Mie scattering theory combined with the Tikhonov regularization method. The maximum size of the hydrogen cluster at 25 K and 6 MPa in the stagnation state was recognized to be 2.15 ± 0.10 μm. The mean cluster size decreased with increasing temperature, and was found to be much larger than that given by Hagena's formula. This discrepancy suggests that the micron-size hydrogen clusters were formed by the atomization (spallation) of the liquid or supercritical fluid phase of hydrogen. In addition, the density profiles of the gas phase were evaluated for 25 to 80 K at 6 MPa using a Nomarski interferometer. Based on the measurement results and the equation of state for hydrogen, the cluster mass fraction was obtained. 3D particle-in-cell (PIC) simulations concerning the interaction processes of micron-size hydrogen clusters with high power laser pulses predicted the generation of protons exceeding 100 MeV, accelerated in the laser propagation direction via an anisotropic Coulomb explosion mechanism, demonstrating a promising candidate for laser-driven proton sources for upcoming multi-petawatt lasers.

  6. Glacier mass variations from recent ITSG-Grace solutions: Experiences with the point-mass modeling technique in the framework of project SPICE.

    NASA Astrophysics Data System (ADS)

    Reimond, S.; Klinger, B.; Krauss, S.; Mayer-Gürr, T.; Eicker, A.; Zemp, M.

    2017-12-01

In recent years, remotely sensed observations have become one of the most ubiquitous and valuable sources of information for glacier monitoring. In addition to altimetry and interferometry data (as observed, e.g., by the CryoSat-2 and TanDEM-X satellites), time-variable gravity field data from the GRACE satellite mission has been used by several authors to assess mass changes in glacier systems. The main challenges in this context are i) the limited spatial resolution of GRACE, ii) the gravity signal attenuation in space and iii) the problem of isolating the glaciological signal from the gravitational signatures as detected by GRACE. In order to tackle the challenges i) and ii), we thoroughly investigate the point-mass modeling technique to represent the local gravity field. Instead of simply evaluating global spherical harmonics, we operate on the normal equation level and make use of GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology. Assessing such small-scale mass changes from space-borne gravimetric data is an ill-posed problem, which we aim to stabilize by utilizing a Genetic Algorithm based Tikhonov regularization. Concerning issue iii), we evaluate three different hydrology models (i.e. GLDAS, LSDM and WGHM) for validation purposes and the derivation of error bounds. The non-glaciological signal is calculated for each region of interest and reduced from the GRACE results. We present mass variations of several alpine glacier systems (e.g. the European Alps, Svalbard or Iceland) and compare our results to glaciological observations provided by the World Glacier Monitoring Service (WGMS) and alternative inversion methods (surface density modeling).

  7. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio

    2015-09-15

Purpose: With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifacts. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.

  8. Advanced methods for modeling water-levels and estimating drawdowns with SeriesSEE, an Excel add-in

    USGS Publications Warehouse

    Halford, Keith; Garcia, C. Amanda; Fenelon, Joe; Mirus, Benjamin B.

    2012-12-21

    Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or “environmental fluctuations.” Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water-levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
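
    The Theis-transform idea, superposing well-function responses to a step-wise pumping record, can be sketched as follows (aquifer values are placeholders, and SeriesSEE's actual implementation adds the other component functions):

```python
import numpy as np
from scipy.special import exp1

# Superpose Theis responses W(u) = exp1(u) for a step-wise pumping record.
def theis_drawdown(t, steps, r=100.0, T=500.0, S=1e-4):
    """steps: list of (start_time, pumping_rate_change); T in m^2/day."""
    s = np.zeros_like(t)
    for t0, dQ in steps:
        on = t > t0
        u = r**2 * S / (4.0 * T * (t[on] - t0))
        s[on] += dQ / (4.0 * np.pi * T) * exp1(u)
    return s

t = np.linspace(0.01, 10.0, 500)                              # days
s = theis_drawdown(t, steps=[(0.0, 1000.0), (5.0, -1000.0)])  # pump on/off
```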

  9. Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N.

    2004-02-01

    We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a triple Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to ~10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of ~50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of <1 μm the real part of the complex refractive index was retrieved to an accuracy of +/-0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.

  10. Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution.

    PubMed

    Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N

    2004-02-10

    We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a triple Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to approximately 10 microm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of approximately 50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of < 1 microm the real part of the complex refractive index was retrieved to an accuracy of +/- 0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.

  11. Calculating Path-Dependent Travel Time Prediction Variance and Covariance from a Global Tomographic P-Velocity Model

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.

    2012-12-01

Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix G (which includes Tikhonov regularization terms) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^{-1} by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^{-1} and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
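
    The final step described above can be sketched with one standard propagation formula (an assumption for illustration; the paper works with the full blocked OOC machinery instead of dense matrices): the travel-time covariance between two paths is a quadratic form of the model covariance in the path sensitivity vectors.

```python
import numpy as np

# Toy-size stand-in for the covariance propagation and path summation.
rng = np.random.default_rng(3)
G = rng.random((100, 30))                  # tomography matrix (toy size)
lam = 0.5                                  # Tikhonov damping
Cd = 0.01 * np.eye(100)                    # assumed data covariance
GtG_inv = np.linalg.inv(G.T @ G + lam**2 * np.eye(30))
Cm = GtG_inv @ G.T @ Cd @ G @ GtG_inv      # model covariance
g1, g2 = rng.random(30), rng.random(30)    # ray-path sensitivity rows
cov_tt = g1 @ Cm @ g2                      # travel-time covariance
sigma_tt = np.sqrt(g1 @ Cm @ g1)           # single-path uncertainty
```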

  12. Use of time series and harmonic constituents of tidal propagation to enhance estimation of coastal aquifer heterogeneity

    USGS Publications Warehouse

    Hughes, Joseph D.; White, Jeremy T.; Langevin, Christian D.

    2010-01-01

A synthetic two-dimensional model of a horizontally and vertically heterogeneous confined coastal aquifer system, based on the Upper Floridan aquifer in south Florida, USA, subjected to constant recharge and a complex tidal signal was used to generate 15-minute water-level data at select locations over a 7-day simulation period. "Observed" water-level data were generated by adding noise, representative of typical barometric pressure variations and measurement errors, to 15-minute data from the synthetic model. Permeability was calibrated using a non-linear gradient-based parameter inversion approach with preferred-value Tikhonov regularization and 1) "observed" water-level data, 2) harmonic constituent data, or 3) a combination of "observed" water-level and harmonic constituent data. In all cases, high-frequency data used in the parameter inversion process were able to characterize broad-scale heterogeneities; the ability to discern fine-scale heterogeneity was greater when harmonic constituent data were used. These results suggest that the combined use of highly parameterized inversion techniques and high-frequency time and/or processed harmonic constituent water-level data could be a useful approach to better characterize aquifer heterogeneities in coastal aquifers influenced by ocean tides.
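
    Extracting harmonic constituents from a water-level series reduces to a linear least-squares fit at known tidal frequencies. A minimal sketch with only the M2 and S2 constituents and synthetic data:

```python
import numpy as np

# Fit h(t) = sum_k [a_k cos(w_k t) + b_k sin(w_k t)] at known frequencies.
t = np.arange(0, 7 * 24, 0.25)                   # 7 days, 15-min steps (h)
omegas = 2 * np.pi / np.array([12.4206, 12.0])   # M2, S2 periods (hours)
h_obs = 0.5 * np.cos(omegas[0] * t - 1.0) + 0.2 * np.cos(omegas[1] * t) \
        + 0.02 * np.random.default_rng(4).normal(size=t.size)

A = np.column_stack([f(w * t) for w in omegas for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(A, h_obs, rcond=None)
amp = np.hypot(coef[0::2], coef[1::2])           # constituent amplitudes
phase = np.arctan2(coef[1::2], coef[0::2])       # constituent phases
```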

  13. Localization and separation of acoustic sources by using a 2.5-dimensional circular microphone array.

    PubMed

    Bai, Mingsian R; Lai, Chang-Sheng; Wu, Po-Chen

    2017-07-01

Circular microphone arrays (CMAs) are sufficient in many immersive audio applications because azimuthal angles of sources are considered more important than elevation angles on those occasions. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for applications which involve three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on the top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively. The direction of arrival (DOA) is estimated from the product of the two array output signals. In the separation stage, Tikhonov regularization and convex optimization are employed to extract the source amplitudes on the basis of the estimated DOA. The extracted signals from the two arrays are further processed by the normalized least-mean-square algorithm with internal iteration to yield the source signal with improved quality. To validate the 2.5-D CMA experimentally, a three-dimensionally printed circular array comprised of a 24-element CMA and an eight-element LLA is constructed. An objective perceptual evaluation of speech quality test and a subjective listening test are also undertaken.

  14. MoisturEC: A New R Program for Moisture Content Estimation from Electrical Conductivity Data.

    PubMed

    Terry, Neil; Day-Lewis, Frederick D; Werkema, Dale; Lane, John W

    2018-03-06

Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data-analysis tools are needed to "translate" geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user-friendly tools are required to fully capitalize on the potential of geophysical information for soil-moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two- and three-dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach.

  15. Three-dimensional tomographic imaging for dynamic radiation behavior study using infrared imaging video bolometers in large helical device plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sano, Ryuichi; Iwama, Naofumi; Peterson, Byron J.

A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed with a helical periodicity assumption for the purpose of plasma radiation measurement in the large helical device. For the spatial inversion of large sized arrays, the system has been numerically and experimentally examined using the Tikhonov regularization with the criterion of minimum generalized cross validation, which is the standard solver of inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus, during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is available for the 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved.
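
    The minimum-GCV criterion used above can be written compactly via the SVD of the projection matrix. A generic sketch (problem sizes and the lambda grid are arbitrary):

```python
import numpy as np

# GCV(lam) = ||A x_lam - b||^2 / trace(I - A A_lam^+)^2, via the SVD of A.
def gcv_lambda(A, b, lams):
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)              # Tikhonov filter factors
        resid2 = np.sum(((1 - f) * Utb)**2) + (b @ b - Utb @ Utb)
        scores.append(resid2 / (A.shape[0] - f.sum())**2)
    return lams[int(np.argmin(scores))]

rng = np.random.default_rng(8)
A = rng.normal(size=(60, 40))
b = A @ np.ones(40) + 0.05 * rng.normal(size=60)
lam_best = gcv_lambda(A, b, np.logspace(-6, 2, 50))
```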

  16. Design of parallel transmission pulses for simultaneous multislice with explicit control for peak power and local specific absorption rate.

    PubMed

    Guérin, Bastien; Setsompop, Kawin; Ye, Huihui; Poser, Benedikt A; Stenger, Andrew V; Wald, Lawrence L

    2015-05-01

To design parallel transmit (pTx) simultaneous multislice (SMS) spokes pulses with explicit control for peak power and local and global specific absorption rate (SAR). We design SMS pTx least-squares and magnitude least squares spokes pulses while constraining local SAR using the virtual observation points (VOPs) compression of SAR matrices. We evaluate our approach in simulations of a head (7T) and a body (3T) coil with eight channels arranged in two z-rows. For many of our simulations, control of average power by Tikhonov regularization of the SMS pTx spokes pulse design yielded pulses that violated hardware and SAR safety limits. On the other hand, control of peak power alone yielded pulses that violated local SAR limits. Pulses optimized with control of both local SAR and peak power satisfied all constraints and therefore had the best excitation performance under limited power and SAR constraints. These results extend our previous results for single slice pTx excitations but are more pronounced because of the large power demands and SAR of SMS pulses. Explicit control of local SAR and peak power is required to generate optimal SMS pTx excitations satisfying both the system's hardware limits and regulatory safety limits.

  17. Coupling of Sentinel-1, Sentinel-2 and ALOS-2 to assess coseismic deformation and earthquake-induced landslides following 26 June, 2016 earthquake in Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Vajedian, Sanaz; Motagh, Mahdi; Wetzel, Hans-Ulrich; Teshebaeva, Kanayim

    2017-04-01

The active deformation in Kyrgyzstan results from the collision between the Indian and Asian tectonic plates at a rate of 29 ± 1 mm/yr. This collision is accommodated by deformation on prominent faults, which can rupture coseismically and trigger other hazards such as landslides. Many earthquakes and earthquake-induced landslides in Kyrgyzstan occur in mountainous areas, where limited accessibility makes ground-based measurements for the assessment of their impact a challenging task. In this context, remote sensing measurements are extraordinarily useful as they improve our knowledge about the coseismic rupture process and provide information on other types of hazards that are triggered during and/or after earthquakes. This investigation uses L-band ALOS/PALSAR, C-band Sentinel-1 and Sentinel-2 data to evaluate the fault slip model and coseismically induced landslides related to the 26 June 2016 Sary-Tash earthquake, southwest Kyrgyzstan. First, we implement three methods to measure coseismic surface motion from radar data, namely Interferometric SAR (InSAR) analysis, the SAR tracking technique and multiple aperture InSAR (MAI); we then use a Genetic Algorithm (GA) to invert the final displacement field for the combination of orientation, location and slip of a rectangular uniform-slip fault plane. Slip distribution analysis is performed by applying Tikhonov regularization to solve the constrained least-squares problem with a Laplacian smoothing approach. The estimated coseismic slip model suggests a nearly W-E thrusting fault ruptured during the earthquake, with the main rupture occurring at a depth between 11 and 14 km. Second, the local phase shifts related to landslides are inferred by detailed analysis of pre-seismic, coseismic and postseismic C-band and L-band interferograms, and the results are compared with the interpretations derived from Sentinel-2 data acquired before and after the earthquake.

  18. Comparison of quantitative myocardial perfusion imaging CT to fluorescent microsphere-based flow from high-resolution cryo-images

    NASA Astrophysics Data System (ADS)

    Eck, Brendan L.; Fahmi, Rachid; Levi, Jacob; Fares, Anas; Wu, Hao; Li, Yuemeng; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

Myocardial perfusion imaging using CT (MPI-CT) has the potential to provide quantitative measures of myocardial blood flow (MBF) which can aid the diagnosis of coronary artery disease. We evaluated the quantitative accuracy of MPI-CT in a porcine model of balloon-induced LAD coronary artery ischemia guided by fractional flow reserve (FFR). We quantified MBF at baseline (FFR=1.0) and under moderate ischemia (FFR=0.7) using MPI-CT and compared to fluorescent microsphere-based MBF from high-resolution cryo-images. Dynamic, contrast-enhanced CT images were obtained using a spectral detector CT (Philips Healthcare). Projection-based mono-energetic images were reconstructed and processed to obtain MBF. Three MBF quantification approaches were evaluated: singular value decomposition (SVD) with fixed Tikhonov regularization (ThSVD), SVD with regularization determined by the L-Curve criterion (LSVD), and Johnson-Wilson parameter estimation (JW). The three approaches over-estimated MBF compared to cryo-images. JW produced the most accurate MBF, with average error 33.3 +/- 19.2 mL/min/100g, whereas LSVD and ThSVD had greater over-estimation, 59.5 +/- 28.3 mL/min/100g and 78.3 +/- 25.6 mL/min/100g, respectively. Relative blood flow as assessed by a flow ratio of LAD-to-remote myocardium was strongly correlated between JW and cryo-imaging, with R2=0.97, compared to R2=0.88 and 0.78 for LSVD and ThSVD, respectively. We assessed tissue impulse response functions (IRFs) from each approach for sources of error. While JW was constrained to physiologic solutions, both LSVD and ThSVD produced IRFs with non-physiologic properties due to noise. The L-curve provided noise-adaptive regularization but did not eliminate non-physiologic IRF properties or optimize for MBF accuracy. These findings suggest that model-based MPI-CT approaches may be more appropriate for quantitative MBF estimation and that cryo-imaging can support the development of MPI-CT by providing spatial distributions of MBF.
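
    The L-curve criterion used by the LSVD variant can be sketched as scanning lambda, plotting log residual norm against log solution norm, and picking the corner of maximum curvature magnitude (a finite-difference estimate on a toy problem; clinical pipelines are more careful):

```python
import numpy as np

# L-curve corner selection for Tikhonov regularization (schematic).
def lcurve_lambda(A, b, lams):
    pts = []
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)
        pts.append((np.log(np.linalg.norm(A @ x - b)),
                    np.log(np.linalg.norm(x))))
    r, n = np.array(pts).T
    dr, dn = np.gradient(r), np.gradient(n)          # curve derivatives
    d2r, d2n = np.gradient(dr), np.gradient(dn)
    kappa = (dr * d2n - dn * d2r) / ((dr**2 + dn**2)**1.5 + 1e-12)
    return lams[int(np.argmax(np.abs(kappa)))]       # corner = max |curvature|

rng = np.random.default_rng(9)
A = rng.normal(size=(60, 40))
b = A @ np.ones(40) + 0.05 * rng.normal(size=60)
lam_best = lcurve_lambda(A, b, np.logspace(-6, 2, 60))
```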

  19. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

The contact concentration measurement data assimilation problem is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimum of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure of the resulting analysis and reduces the need to calculate the model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of the adjoint problem solution. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solution can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance, the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate used is an upper bound acts as the assimilation parameter. The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage that is responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme with respect to time. The scheme is based on analytical extraction of the exponential terms from the solution. This guarantees an unconditionally positive sign for the evaluated concentrations. The splitting-based structure of the algorithm provides means for efficient parallel realization. The work is partially supported by the Programs No 4 of Presidium RAS and No 3 of Mathematical Department of RAS, by RFBR project 11-01-00187 and Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004.
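
    The Morozov discrepancy principle invoked above picks the regularization parameter so that the residual matches the noise level delta; since the residual grows monotonically with the parameter, a bisection suffices. A generic sketch:

```python
import numpy as np

# Choose lam so that ||A x_lam - b|| ~= delta (discrepancy principle).
def morozov_lambda(A, b, delta, lo=1e-8, hi=1e8, iters=60):
    def residual(lam):
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        return np.linalg.norm(A @ x - b)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)            # geometric bisection
        if residual(mid) < delta:
            lo = mid                      # residual too small: raise lam
        else:
            hi = mid
    return np.sqrt(lo * hi)

rng = np.random.default_rng(10)
A = rng.normal(size=(50, 20))
b = A @ np.ones(20) + 0.05 * rng.normal(size=50)
lam = morozov_lambda(A, b, delta=0.05 * np.sqrt(50))  # delta ~ noise norm
```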

  20. Three dimensional lithospheric magnetization structures beneath Australia derived by inverse modeling of CHAMP satellite magnetic field model

    NASA Astrophysics Data System (ADS)

    Du, Jinsong; Chen, Chao; Lesur, Vincent; Li, Yaoguo; Lane, Richard; Liang, Qing; Wang, Haoran

    2014-05-01

We present an inversion algorithm for magnetic anomaly data in spherical coordinates to image the three-dimensional (3-D) susceptibility distributions in the lithosphere. The method assumes that remanent magnetization is absent and that the magnetic anomalies are solely the result of lateral variations in magnetic susceptibility. To take into account the curvature of the Earth, the 3-D model is comprised of a set of spherical prisms (referred to as tesseroids), each of which has a constant isotropic susceptibility. The inversion method is formulated with a specifically designed model objective function and radial weighting function in spherical coordinates. A Tikhonov regularization technique is used to obtain an optimal solution with data misfit consistent with the estimated error level. Results for regional synthetic models with different magnetized inclinations and declinations are presented to demonstrate the capability of the method to recover large-scale lithospheric magnetic structures. We have applied the algorithm to study the lithospheric susceptibility structures in the Australia region using magnetic anomaly data from the GRIMM_L120v0.0 model, which is based on ten years of CHAMP satellite data. As a self-constrained inversion, the maximum depth variation of the magnetized layer is estimated first and then incorporated into the 3-D inversion. Results showed that the susceptibility anomalies concentrate in the depth range from 25 km to 45 km, i.e., they are focused in the lower crust. In addition, the results showed that the susceptibilities in continental lithosphere are higher than those in oceanic lithosphere. The inverted 3-D susceptibility distribution in the region of Australia reveals significant features related to tectonics, surface heat-flux, crustal thickness and Curie isotherm depths. In general, the higher susceptibility anomalies correlate with Precambrian rocks, and the lower susceptibility anomalies correlate with younger orogenic belts, suture zones and modern uplifts. In detail, the inverted susceptibility distribution shows differences in the magnetic structures between the eastern and western parts of the Yilgarn Craton, and three lower susceptibility belts from north to south in the Eromanga Basin and the Gawler Craton with high susceptibility that extend to the ocean and then to the west.

  1. Expedition 49 Preflight

    NASA Image and Video Library

    2016-10-07

Expedition 49 backup crew members Mark Vande Hei of NASA, left, Alexander Misurkin, center, and Nikolai Tikhonov of Roscosmos, right, report to Russian space officials after arriving in Baikonur, Kazakhstan on Friday, Oct. 7, 2016. The trio are preparing for launch to the International Space Station in their Soyuz MS-02 spacecraft from the Baikonur Cosmodrome in Kazakhstan on October 19, 2016. Photo Credit: (NASA/Victor Zelentsov)

  2. On an application of Tikhonov's fixed point theorem to a nonlocal Cahn-Hilliard type system modeling phase separation

    NASA Astrophysics Data System (ADS)

    Colli, Pierluigi; Gilardi, Gianni; Sprekels, Jürgen

    2016-06-01

    This paper investigates a nonlocal version of a model for phase separation on an atomic lattice that was introduced by P. Podio-Guidugli (2006) [36]. The model consists of an initial-boundary value problem for a nonlinearly coupled system of two partial differential equations governing the evolution of an order parameter ρ and the chemical potential μ. Singular contributions to the local free energy in the form of logarithmic or double-obstacle potentials are admitted. In contrast to the local model, which was studied by P. Podio-Guidugli and the present authors in a series of recent publications, in the nonlocal case the equation governing the evolution of the order parameter contains in place of the Laplacian a nonlocal expression that originates from nonlocal contributions to the free energy and accounts for possible long-range interactions between the atoms. It is shown that just as in the local case the model equations are well posed, where the technique of proving existence is entirely different: it is based on an application of Tikhonov's fixed point theorem in a rather unusual separable and reflexive Banach space.
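
    For reference, the classical statement of the fixed point theorem being applied (a standard formulation, not the paper's notation):

```latex
% Tychonoff (Tikhonov) fixed point theorem:
% if $K$ is a nonempty compact convex subset of a locally convex
% Hausdorff topological vector space and $f\colon K\to K$ is continuous,
% then $f$ has a fixed point:
\[
  \exists\, x^{*}\in K \quad\text{such that}\quad f(x^{*}) = x^{*}.
\]
```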

  3. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results demonstrate the surprisingly good performance of these techniques. This research is supported by EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
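
    One of the modifications discussed, sparsity with a nonnegativity constraint, can be sketched as a projected iterative shrinkage step (a generic ISTA variant for min 0.5||Ax-b||^2 + mu||x||_1 subject to x >= 0, not the paper's exact algorithms):

```python
import numpy as np

# Projected ISTA: the soft threshold and the nonnegative projection
# combine into a single max(0, .) step for x >= 0.
def nonneg_ista(A, b, mu, iters=500):
    x = np.zeros(A.shape[1])
    eta = 1.0 / np.linalg.norm(A, 2)**2          # step size 1/L
    for _ in range(iters):
        g = A.T @ (A @ x - b)                    # gradient of the data fit
        x = np.maximum(0.0, x - eta * (g + mu))  # shrink + project
    return x

rng = np.random.default_rng(6)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[[5, 37, 80]] = [2.0, 1.0, 3.0]
b = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = nonneg_ista(A, b, mu=0.1)                # sparse, nonnegative
```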

  4. Expedition 49 Preflight

    NASA Image and Video Library

    2016-10-07

Expedition 49 backup crew members Mark Vande Hei of NASA, left, Alexander Misurkin, center, and Nikolai Tikhonov of Roscosmos, right, exit the Gagarin Cosmonaut Training Center (GCTC) aircraft after arriving in Baikonur, Kazakhstan on Friday, Oct. 7, 2016. The trio are preparing for launch to the International Space Station in their Soyuz MS-02 spacecraft from the Baikonur Cosmodrome in Kazakhstan on October 19, 2016. Photo Credit: (NASA/Victor Zelentsov)

  5. A distribution-based parametrization for improved tomographic imaging of solute plumes

    USGS Publications Warehouse

    Pidlisecky, Adam; Singha, K.; Day-Lewis, F. D.

    2011-01-01

Difference geophysical tomography (e.g. radar, resistivity and seismic) is used increasingly for imaging fluid flow and mass transport associated with natural and engineered hydrologic phenomena, including tracer experiments, in situ remediation and aquifer storage and recovery. Tomographic data are collected over time, inverted and differenced against a background image to produce 'snapshots' revealing changes to the system; these snapshots readily provide qualitative information on the location and morphology of plumes of injected tracer, remedial amendment or stored water. In principle, geometric moments (i.e. total mass, centres of mass, spread, etc.) calculated from difference tomograms can provide further quantitative insight into the rates of advection, dispersion and mass transfer; however, recent work has shown that moments calculated from tomograms are commonly biased, as they are strongly affected by the subjective choice of regularization criteria. Conventional approaches to regularization (Tikhonov) and parametrization (image pixels) result in tomograms which are subject to artefacts such as smearing or pixel estimates taking on the sign opposite to that expected for the plume under study. Here, we demonstrate a novel parametrization for imaging plumes associated with hydrologic phenomena. Capitalizing on the mathematical analogy between moment-based descriptors of plumes and the moment-based parameters of probability distributions, we design an inverse problem that (1) is overdetermined and computationally efficient because the image is described by only a few parameters, (2) produces tomograms consistent with expected plume behaviour (e.g. changes of one sign relative to the background image), (3) yields parameter estimates that are readily interpreted for plume morphology and offer direct insight into hydrologic processes and (4) requires comparatively few data to achieve reasonable model estimates. We demonstrate the approach in a series of numerical examples based on straight-ray difference-attenuation radar monitoring of the transport of an ionic tracer, and show that the methodology outlined here is particularly effective when limited data are available.
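
    The moment-based descriptors underpinning the parametrization can be computed directly from a difference tomogram. A sketch on a synthetic 2-D anomaly (grid and values hypothetical):

```python
import numpy as np

# Geometric moments of a 2-D difference tomogram dT on a regular grid:
# zeroth ~ total change, first ~ centre of mass, second ~ spread.
yy, xx = np.mgrid[0:50, 0:50] * 0.1            # grid coordinates (m)
dT = np.exp(-((xx - 2.0)**2 + (yy - 3.0)**2))  # toy plume anomaly

m0 = dT.sum()                                  # total anomaly "mass"
cx, cy = (xx * dT).sum() / m0, (yy * dT).sum() / m0
sx2 = ((xx - cx)**2 * dT).sum() / m0           # variance about centre, x
sy2 = ((yy - cy)**2 * dT).sum() / m0           # variance about centre, y
```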

  6. Robust point matching via vector field consensus.

    PubMed

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
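
    The smoothness prior at the heart of the method can be illustrated in isolation. The sketch below is only the Tikhonov-regularized interpolation step in an RKHS with a Gaussian kernel; the EM machinery that re-weights inliers and outliers is omitted, every putative match is treated as an inlier, and all names are hypothetical.

```python
import numpy as np

def fit_vector_field(X, Y, lam, gamma=0.5):
    """Kernel ridge (Tikhonov-in-RKHS) interpolation of a displacement
    field: given point positions X (n x d) and displacements Y (n x d),
    solve (K + lam*I) C = Y for a Gaussian kernel K and evaluate the
    field as f(x) = sum_i k(x, x_i) C_i."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    C = np.linalg.solve(K + lam * np.eye(len(X)), Y)

    def field(Xq):
        Kq = np.exp(-gamma * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return Kq @ C

    return field
```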

  7. A non-invasive Hall current distribution measurement system for Hall Effect thrusters

    NASA Astrophysics Data System (ADS)

    Mullins, Carl Raymond

    A direct, accurate method to measure the thrust produced by a Hall effect thruster on orbit does not currently exist. The ability to calculate the produced thrust will enable timely and precise maneuvering of spacecraft, a capability particularly important to satellite formation flying. Thrust can be determined directly by remotely measuring the magnetic field of the thruster and solving the inverse magnetostatic problem for the Hall current density distribution. For this thesis, the magnetic field was measured by employing an array of eight tunneling magnetoresistive (TMR) sensors capable of milligauss sensitivity when placed in a high background field. The array was positioned outside the channel of a 1.5 kW Colorado State University Hall thruster equipped with a center-mounted electride cathode. In this location, the static magnetic field is approximately 30 gauss, which is within the linear operating range of the TMR sensors. Furthermore, the induced field at this distance is greater than tens of milligauss, which is within the sensitivity range of the TMR sensors. Due to the nature of the inverse problem, the induced-field measurements do not provide the Hall current density by a simple inversion; however, a Tikhonov regularization of the induced field, along with a non-negativity constraint and a zero boundary condition, provides current density distributions. Our system measures the sensor outputs at 2 MHz, allowing the determination of the Hall current density distribution as a function of time. These data are shown in contour plots in sequential frames. The measured ratios between the average Hall current and the discharge current ranged from 0.1 to 10 over a range of operating conditions from 1.3 kW to 2.2 kW. The temporal inverse solution at 2.0 kW exhibited a breathing mode of 37 kHz, which was in agreement with temporal measurements of the discharge current.
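
    The regularized, nonnegativity-constrained inversion described above has a compact formulation: zero-order Tikhonov regularization with x >= 0 is an ordinary nonnegative least-squares problem on a stacked system. The sketch below assumes a precomputed kernel matrix G mapping Hall current density to induced field (a hypothetical input, not the thesis code).

```python
import numpy as np
from scipy.optimize import nnls

def tikhonov_nnls(G, b, alpha):
    """Nonnegative Tikhonov inversion: min ||Gx - b||^2 + alpha*||x||^2
    s.t. x >= 0, solved as plain NNLS on the stacked system
    [G; sqrt(alpha)*I] x = [b; 0]. A zero boundary condition can be
    imposed by deleting the boundary columns of G beforehand."""
    n = G.shape[1]
    G_aug = np.vstack([G, np.sqrt(alpha) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, _ = nnls(G_aug, b_aug)
    return x
```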

  8. Experimental investigation of piston heat transfer under conventional diesel and reactivity-controlled compression ignition combustion regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Splitter, Derek A; Hendricks, Terry Lee; Ghandhi, Jaal B

    2014-01-01

    The piston of a heavy-duty single-cylinder research engine was instrumented with 11 fast-response surface thermocouples, and a commercial wireless telemetry system was used to transmit the signals from the moving piston. The raw thermocouple data were processed using an inverse heat conduction method that included Tikhonov regularization to recover the transient heat flux. By applying symmetry, the data were compiled to provide time-resolved spatial maps of the piston heat flux and surface temperature. A detailed comparison was made between conventional diesel combustion and reactivity-controlled compression ignition combustion operations at matched conditions of load, speed, boost pressure, and combustion phasing. The integrated piston heat transfer was found to be 24% lower, and the mean surface temperature was 25 °C lower, for reactivity-controlled compression ignition operation as compared to conventional diesel combustion, in spite of the higher peak heat release rate. Lower integrated piston heat transfer for reactivity-controlled compression ignition was found over all the operating conditions tested. The results showed that increasing speed decreased the integrated heat transfer for both conventional diesel combustion and reactivity-controlled compression ignition. The start of injection timing was found to strongly influence conventional diesel combustion heat flux, but had a negligible effect on reactivity-controlled compression ignition heat flux, even in the limit of near top dead center high-reactivity fuel injection timings. These results suggest that the high-reactivity fuel injection does not significantly affect the thermal environment even though it is important for controlling the ignition timing and the heat release rate shape. The integrated heat transfer and the dynamic surface heat flux were found to be insensitive to changes in boost pressure for both conventional diesel combustion and reactivity-controlled compression ignition. However, for reactivity-controlled compression ignition, the mean surface temperature increased with changes in boost, suggesting that the equivalence ratio affects steady-state heat transfer.
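
    The Tikhonov step of such an inverse heat conduction method amounts to a regularized deconvolution. In the sketch below, X is a hypothetical (lower-triangular) sensitivity matrix relating past heat-flux steps to the measured temperatures; a first-derivative penalty suppresses the noise amplification inherent in the deconvolution.

```python
import numpy as np

def recover_heat_flux(T, X, beta):
    """Tikhonov-regularized solution of T = X q for the heat-flux history q:
    minimize ||Xq - T||^2 + beta*||Dq||^2, with D a discrete first-difference
    operator that penalizes jagged flux histories."""
    n = X.shape[1]
    D = np.eye(n) - np.eye(n, k=-1)        # discrete first-difference operator
    return np.linalg.solve(X.T @ X + beta * D.T @ D, X.T @ T)
```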

  9. Observing Trace Gases Of The Arctic And Subarctic Stratosphere By TELIS

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Schreier, Franz; Doicu, Adrian; Vogt, Peter; Birk, Manfred; Wagner, Georg; Trautmann, Thomas

    2013-12-01

    The Terahertz and submillimeter Limb Sounder (TELIS) is a balloon-borne cryogenic heterodyne spectrometer developed by a consortium of European institutes, which was mounted together with the Michelson Interferometer for Passive Atmospheric Sounding - Balloon (MIPAS-B) and the mini-Differential Optical Absorption Spectroscopy (mini-DOAS) instruments on a stratospheric gondola. The TELIS instrument is designed to monitor the vertical distribution of stratospheric state parameters associated with ozone destruction and climate change in Arctic and subarctic areas. The broad spectral coverage of TELIS is achieved by utilizing three frequency channels: a tunable 1.8 THz channel based on a solid state local oscillator and a hot electron bolometer as mixer, a 480-650 GHz channel based on Superconducting Integrated Receiver (SIR) technology, and a highly compact 500 GHz channel, developed by the German Aerospace Center (DLR), the Netherlands Institute for Space Research (SRON), and the Rutherford Appleton Laboratory (RAL), respectively. Furthermore, an extended spectral range is covered by the combination of TELIS and MIPAS-B, which can be employed for cross validation of several gas concentrations. Between 2009 and 2011, three successful scientific flights were launched from Kiruna, Sweden, and all relevant atmospheric gas species were seen by TELIS over an altitude range of 10-32.5 km. For the estimation of concentration profiles from TELIS measurements, a constrained nonlinear least squares fitting framework along with various Tikhonov-type regularization methods has been developed. In this work we present recent retrieval results from the latest calibrated spectra of the 2010 flight. Emphasis is placed on ozone (O3) and hydrogen chloride (HCl), and error issues pertaining to the main instrumental uncertainty terms, including nonlinearity in the calibration procedure, sideband ratio and pointing offset, are investigated. The retrieved profiles are validated against other limb sounding instruments, e.g. the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES), the Microwave Limb Sounder (MLS), and MIPAS-B.

  10. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for the SALSA3D Global Tomographic P-Velocity Model with a Distributed Parallel Multi-Core Computer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.

    2011-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by integrating the model covariance along both ray paths. Setting the two paths equal gives the variance for a single path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
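
    In-core and for a small model, the variance/covariance computation reduces to a few lines. The sketch below mirrors the Cholesky-based route (without the out-of-core blocking or the parallel decomposition) under an assumed unit data covariance.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ray_traveltime_cov(G, g1, g2):
    """Covariance between the predicted travel times of two rays. G stacks
    the tomographic sensitivity rows and the Tikhonov regularization rows,
    so G^T G is invertible and Cm = (G^T G)^-1 acts as the model covariance
    (unit data covariance assumed for brevity). Passing g2 = g1 returns the
    travel-time variance of a single path."""
    chol = cho_factor(G.T @ G)             # Cholesky step, as in the paper
    return g1 @ cho_solve(chol, g2)        # g1^T Cm g2 without forming Cm
```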

  11. Superresolution parallel magnetic resonance imaging: Application to functional and spectroscopic imaging

    PubMed Central

    Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan

    2009-01-01

    Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix, and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils is combined into a single image with higher spatial resolution, using coil sensitivities acquired with high spatial resolution. The effective acceleration over conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension(s), SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
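
    The noise-control step can be sketched as a standard Tikhonov-regularized least-squares solve; E below is a hypothetical encoding matrix standing in for the actual SURE-SENSE operator that combines the high-resolution coil sensitivities with the low-resolution gradient encoding.

```python
import numpy as np

def tikhonov_recon(E, y, lam):
    """Tikhonov-regularized reconstruction x = (E^H E + lam*I)^-1 E^H y,
    where y stacks the multi-coil data; lam trades residual aliasing
    against noise amplification."""
    n = E.shape[1]
    return np.linalg.solve(E.conj().T @ E + lam * np.eye(n), E.conj().T @ y)
```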

  12. Non-invasive Hall current distribution measurement in a Hall effect thruster

    NASA Astrophysics Data System (ADS)

    Mullins, Carl R.; Farnell, Casey C.; Farnell, Cody C.; Martinez, Rafael A.; Liu, David; Branam, Richard D.; Williams, John D.

    2017-01-01

    A means is presented to determine the Hall current density distribution in a closed drift thruster by remotely measuring the magnetic field and solving the inverse problem for the current density. The magnetic field was measured by employing an array of eight tunneling magnetoresistive (TMR) sensors capable of milligauss sensitivity when placed in a high background field. The array was positioned just outside the thruster channel on a 1.5 kW Hall thruster equipped with a center-mounted hollow cathode. In the sensor array location, the static magnetic field is approximately 30 G, which is within the linear operating range of the TMR sensors. Furthermore, the induced field at this distance is on the order of tens of milligauss, which is within the sensitivity range of the TMR sensors. Because of the nature of the inverse problem, the induced-field measurements do not provide the Hall current density by a simple inversion; however, a Tikhonov regularization of the induced field does provide the current density distributions. These distributions are shown as a function of time in contour plots. The measured ratios between the average Hall current and the average discharge current ranged from 6.1 to 7.3 over a range of operating conditions from 1.3 kW to 2.2 kW. The temporal inverse solution at 1.5 kW exhibited a breathing mode frequency of 24 kHz, which was in agreement with temporal measurements of the discharge current.

  13. Non-invasive Hall current distribution measurement in a Hall effect thruster.

    PubMed

    Mullins, Carl R; Farnell, Casey C; Farnell, Cody C; Martinez, Rafael A; Liu, David; Branam, Richard D; Williams, John D

    2017-01-01

    A means is presented to determine the Hall current density distribution in a closed drift thruster by remotely measuring the magnetic field and solving the inverse problem for the current density. The magnetic field was measured by employing an array of eight tunneling magnetoresistive (TMR) sensors capable of milligauss sensitivity when placed in a high background field. The array was positioned just outside the thruster channel on a 1.5 kW Hall thruster equipped with a center-mounted hollow cathode. In the sensor array location, the static magnetic field is approximately 30 G, which is within the linear operating range of the TMR sensors. Furthermore, the induced field at this distance is on the order of tens of milligauss, which is within the sensitivity range of the TMR sensors. Because of the nature of the inverse problem, the induced-field measurements do not provide the Hall current density by a simple inversion; however, a Tikhonov regularization of the induced field does provide the current density distributions. These distributions are shown as a function of time in contour plots. The measured ratios between the average Hall current and the average discharge current ranged from 6.1 to 7.3 over a range of operating conditions from 1.3 kW to 2.2 kW. The temporal inverse solution at 1.5 kW exhibited a breathing mode frequency of 24 kHz, which was in agreement with temporal measurements of the discharge current.

  14. ZEEMAN DOPPLER MAPS: ALWAYS UNIQUE, NEVER SPURIOUS?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stift, Martin J.; Leone, Francesco

    Numerical models of atomic diffusion in magnetic atmospheres of ApBp stars predict abundance structures that differ from the empirical maps derived with (Zeeman) Doppler mapping (ZDM). An in-depth analysis of this apparent disagreement investigates the detectability by means of ZDM of a variety of abundance structures, including (warped) rings predicted by theory, but also complex spot-like structures. Even when spectra of high signal-to-noise ratio are available, it can prove difficult or altogether impossible to correctly recover shapes, positions, and abundances of a mere handful of spots, notwithstanding the use of all four Stokes parameters and an exactly known field geometry; the recovery of (warped) rings can be equally challenging. Inversions of complex abundance maps that are based on just one or two spectral lines usually permit multiple solutions. It turns out that it can by no means be guaranteed that any of the regularization functions in general use for ZDM (maximum entropy or Tikhonov) will lead to a true abundance map instead of some spurious one. Attention is drawn to the need for a study that would elucidate the relation between the stratified, field-dependent abundance structures predicted by diffusion theory on the one hand, and empirical maps obtained by means of “canonical” ZDM, i.e., with mean atmospheres and unstratified abundances, on the other hand. Finally, we point out difficulties arising from the three-dimensional nature of the atomic diffusion process in magnetic ApBp star atmospheres.

  15. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Lin, Jian-Zhong

    2013-01-01

    First, an improved anomalous diffraction approximation (ADA) method is presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction for varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectral extinction with more significant features can be selected as the input data, and that with fewer features is removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data.

  16. Conflict or Consensus: East Germany, the Soviet Union and Deutschlandpolitik 1958-1984.

    DTIC Science & Technology

    1986-06-01

    ...post-Stalin world. The result was that in the 1957-1962 period there still existed submerged conflicts within the Kremlin, which did not allow a... [Garbled table fragments: "AGE AND EXPERIENCE OF FULL POLITBURO MEMBERS (Fall 1984)", with columns for name, birth date, age, dates of candidate and full membership, and years as full member (entries include Tikhonov, b. 1905, full member 1980, and Ustinov); and "AGE AND EXPERIENCE OF FORMER POLITBURO MEMBERS (1980-1983)", with columns for name, birth and death dates, and age at death/removal (entries include Pelshe, b. 1899).]

  17. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    NASA Astrophysics Data System (ADS)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and a fundamental problem in neurophysiology is the identification of the sources responsible for brain activity; this is especially important when a seizure occurs and must be localized. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the electrodes, is more difficult because it may not have a unique solution, and the search for a solution is hampered by a low spatial resolution that may not allow distinguishing between activities involving sources close to each other. Thus, sources of interest may be obscured or not detected, and a known source localization method such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information on the sparsity of the signal. The problem is formulated and solved using a Tikhonov-type regularization method, which calculates a solution that is the best compromise between two cost functions to be minimized: one related to the fitting of the data, and another concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the head model and brain sources considered, the result obtained is a significant improvement over the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the spatial sparsity constraints on the signal field concentrate power in the directions of active sources, and consequently it is possible to calculate the position of the sources within the considered volume conductor. The method is then tested on real EEG data as well. The result is in accordance with the clinical report, even if improvements are necessary to obtain more accurate estimates of the positions of the sources.

  18. Acoustic imaging in application to reconstruction of rough rigid surface with airborne ultrasound waves

    NASA Astrophysics Data System (ADS)

    Krynkin, A.; Dolcetti, G.; Hunting, S.

    2017-02-01

    Accurate reconstruction of surface roughness is of high importance to various areas of science and engineering. One important application of this technology is the remote monitoring of open channel flows through observation of their dynamic surface roughness. In this paper a novel airborne acoustic method of roughness reconstruction is proposed and tested with a static rigid rough surface. The method is based on the acoustic holography principle and the Kirchhoff approximation, which make use of acoustic pressure data collected at multiple receiver points spread along an arch. Tikhonov regularisation and the generalised cross validation technique are used to solve the underdetermined system of equations for the acoustic pressures. The experimental data are collected above a roughness created with a 3D printer. For the given surface, it is shown that the proposed method works well for various numbers of receiver positions. In this paper, the tested ratios between the number of surface points at which the surface elevation can be reconstructed and the number of receiver positions are 2.5, 5, and 7.5. It is shown that, in a region comparable with the projected size of the main directivity lobe, the method is able to reconstruct the spatial spectral density of the actual surface elevation with an accuracy of 20%.
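
    The generalised cross validation step can be written compactly through the SVD filter factors of the Tikhonov problem. The sketch below assumes a real-valued system for simplicity and is not the authors' implementation.

```python
import numpy as np

def gcv_tikhonov(A, b, lams):
    """Generalised cross validation for the Tikhonov parameter:
    GCV(lam) = n*||b - A x_lam||^2 / trace(I - A(lam))^2, evaluated via
    the SVD filter factors f_i = s_i^2 / (s_i^2 + lam^2)."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    resid_perp = b @ b - beta @ beta       # energy outside the range of A
    n = len(b)
    best, best_score = None, np.inf
    for lam in lams:
        f = s**2 / (s**2 + lam**2)         # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2) + resid_perp
        score = n * resid / (n - f.sum()) ** 2
        if score < best_score:
            best, best_score = lam, score
    return best
```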

  19. Acoustic imaging in application to reconstruction of rough rigid surface with airborne ultrasound waves.

    PubMed

    Krynkin, A; Dolcetti, G; Hunting, S

    2017-02-01

    Accurate reconstruction of surface roughness is of high importance to various areas of science and engineering. One important application of this technology is the remote monitoring of open channel flows through observation of their dynamic surface roughness. In this paper a novel airborne acoustic method of roughness reconstruction is proposed and tested with a static rigid rough surface. The method is based on the acoustic holography principle and the Kirchhoff approximation, which make use of acoustic pressure data collected at multiple receiver points spread along an arch. Tikhonov regularisation and the generalised cross validation technique are used to solve the underdetermined system of equations for the acoustic pressures. The experimental data are collected above a roughness created with a 3D printer. For the given surface, it is shown that the proposed method works well for various numbers of receiver positions. In this paper, the tested ratios between the number of surface points at which the surface elevation can be reconstructed and the number of receiver positions are 2.5, 5, and 7.5. It is shown that, in a region comparable with the projected size of the main directivity lobe, the method is able to reconstruct the spatial spectral density of the actual surface elevation with an accuracy of 20%.

  20. Spatially constrained Bayesian inversion of frequency- and time-domain electromagnetic data from the Tellus projects

    NASA Astrophysics Data System (ADS)

    Kiyan, Duygu; Rath, Volker; Delhaye, Robert

    2017-04-01

    The frequency- and time-domain airborne electromagnetic (AEM) data collected under the Tellus projects of the Geological Survey of Ireland (GSI) represent a wealth of information on the multi-dimensional electrical structure of Ireland's near-surface. Our project, funded by GSI under the framework of their Short Call Research Programme, aims to develop and implement inverse techniques based on various Bayesian methods for these densely sampled data. We have developed a highly flexible toolbox in the Python language for the one-dimensional inversion of AEM data along the flight lines. The computational core is based on an adapted frequency- and time-domain forward modelling core derived from the well-tested open-source code AirBeo, which was developed by the CSIRO (Australia) and the AMIRA consortium. Three different inversion methods have been implemented: (i) Tikhonov-type inversion including optimal regularisation methods (Aster et al., 2012; Zhdanov, 2015), (ii) Bayesian MAP inversion in parameter and data space (e.g. Tarantola, 2005), and (iii) full Bayesian inversion with Markov chain Monte Carlo (Sambridge and Mosegaard, 2002; Mosegaard and Sambridge, 2002), all including different forms of spatial constraints. The methods have been tested on synthetic and field data. This contribution will introduce the toolbox and present case studies on the AEM data from the Tellus projects.

  1. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects is treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum-likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including the use of piecewise constant zones, the use of pilot points with Tikhonov regularization and the use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  2. A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc

    A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). The POD modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; the eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficient. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding constraints to the minimization seeking the polynomial parameters, reducing the error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces the mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternate processes that could benefit stability and efficiency, and desired extensions of the wakeROM.
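
    The snapshot POD on which the wakeROM is built takes only a few lines of linear algebra; the names and shapes below are hypothetical.

```python
import numpy as np

def pod(snapshots, n_modes):
    """POD of a snapshot matrix whose columns are fluctuating-velocity
    fields at successive instants. Left singular vectors are the spatial
    modes, squared singular values measure the turbulent kinetic energy
    carried by each mode, and back-projection gives the time coefficients
    that the reduced-order model then evolves with ODEs."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]
    energy = s[:n_modes] ** 2
    coeffs = modes.T @ snapshots           # mode amplitude time series
    return modes, energy, coeffs
```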

  3. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.

    PubMed

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir

    2015-09-01

    With recent advancements in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed better performance than conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
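
    The structure of the cost function is easy to state concretely. The sketch below evaluates an eps-smoothed version of such a TV-L1 objective so that plain gradient descent applies; for brevity the sparsifying transform is taken to be the identity, which the paper does not assume.

```python
import numpy as np

def tv_l1_cost(x, A, b, mu, nu, eps=1e-6):
    """Smoothed TV-L1 model-based cost for a flattened n-by-n image x:
    0.5*||Ax - b||^2 + mu*||x||_1 + nu*TV(x), with both nonsmooth terms
    eps-smoothed (identity sparsifying transform for simplicity)."""
    n = int(round(np.sqrt(x.size)))
    img = x.reshape(n, n)
    gx = np.diff(img, axis=0)[:, :-1]      # forward differences, cropped to
    gy = np.diff(img, axis=1)[:-1, :]      # a common (n-1) x (n-1) grid
    tv = np.sum(np.sqrt(gx**2 + gy**2 + eps))
    l1 = np.sum(np.sqrt(x**2 + eps))       # smooth surrogate of |x|
    return 0.5 * np.sum((A @ x - b) ** 2) + mu * l1 + nu * tv
```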

  4. Regional flow in a complex coastal aquifer system: Combining voxel geological modelling with regularized calibration

    NASA Astrophysics Data System (ADS)

    Meyer, Rena; Engesgaard, Peter; Høyer, Anne-Sophie; Jørgensen, Flemming; Vignoli, Giulio; Sonnenborg, Torben O.

    2018-07-01

    Low-lying coastal regions are often highly populated, constitute sensitive habitats, and are at the same time exposed to challenging hydrological environments due to surface flooding from storm events and saltwater intrusion, both of which may affect drinking water supply from shallow and deeper aquifers. Near the Wadden Sea at the border of Southern Denmark and Northern Germany, the hydraulic system (connecting groundwater, river water, and the sea) was altered over centuries (until the 19th century) by e.g. the construction of dikes and drains to prevent flooding and allow agricultural use. Today, massive saltwater intrusions extend up to 20 km inland. In order to understand the regional flow, a methodological approach was developed that combined: (1) a highly resolved voxel geological model, (2) a ∼1 million node groundwater model with 46 hydrofacies coupled to rivers, drains and the sea, (3) Tikhonov regularization calibration using hydraulic heads and average stream discharges as targets and (4) parameter uncertainty analysis. Using voxel models to construct geological models is relatively new; such models have often been simplified to stacked, pseudo-3D layer geology. The study is therefore one of the first to combine a voxel geological model with state-of-the-art flow calibration techniques. The results show that voxel geological modelling, where lithofacies information is transferred to each volumetric element, is a useful method to preserve 3D geological heterogeneity on a local scale, which is important when distinct geological features such as buried valleys are abundant. Furthermore, it is demonstrated that simpler geological models and simpler calibration methods do not perform as well. The proposed approach is applicable to many other systems, because it combines advanced and flexible geological modelling and flow calibration techniques. This has led to new insights into the regional flow patterns and especially into water cycling in the marsh area near the coast, based on the ability to define six predictive scenarios from the linear analysis of parameter uncertainty. The results show that the coastal system near the Danish-German border is mainly controlled by flow in the two aquifers separated by a thick clay layer, and by several deep high-permeable buried valleys that connect the sea with the interior and the two aquifers. The drained marsh area acts like a huge regional sink limiting submarine groundwater discharge. With respect to the water balance, the greatest sensitivity to parameter uncertainty was observed in the drained marsh area, where some scenarios showed increased flow of sea water into the interior and increased drainage. We speculate that the massive saltwater intrusion may be caused by a combination of the preferential pathways provided by the buried valleys, the marsh drainage and relatively high hydraulic conductivities in the two main aquifers, as described by one of the scenarios. This is currently under investigation using a saltwater transport model.

  5. Potential effects of groundwater pumping on water levels, phreatophytes, and spring discharges in Spring and Snake Valleys, White Pine County, Nevada, and adjacent areas in Nevada and Utah

    USGS Publications Warehouse

    Halford, Keith J.; Plume, Russell W.

    2011-01-01

    Assessing hydrologic effects of developing groundwater supplies in Snake Valley required numerical groundwater-flow models to estimate the timing and magnitude of capture from streams, springs, wetlands, and phreatophytes. Estimating general water-table decline also required groundwater simulation. The hydraulic conductivity of basin fill and transmissivity of basement-rock distributions in Spring and Snake Valleys were refined by calibrating a steady state, three-dimensional, MODFLOW model of the carbonate-rock province to predevelopment conditions. Hydraulic properties and boundary conditions were defined primarily from the Regional Aquifer-System Analysis (RASA) model except in Spring and Snake Valleys. This locally refined model was referred to as the Great Basin National Park calibration (GBNP-C) model. Groundwater discharges from phreatophyte areas and springs in Spring and Snake Valleys were simulated as specified discharges in the GBNP-C model. These discharges equaled mapped rates and measured discharges, respectively. Recharge, hydraulic conductivity, and transmissivity were distributed throughout Spring and Snake Valleys with pilot points and interpolated to model cells with kriging in geologically similar areas. Transmissivity of the basement rocks was estimated because thickness is poorly correlated with transmissivity. Transmissivity estimates were constrained by aquifer-test results in basin-fill and carbonate-rock aquifers. Recharge, hydraulic conductivity, and transmissivity distributions of the GBNP-C model were estimated by minimizing a weighted composite, sum-of-squares objective function that included measurement and Tikhonov regularization observations. Tikhonov regularization observations were equations that defined preferred relations between the pilot points. Measured water levels, water levels that were simulated with RASA, depth-to-water beneath distributed groundwater and spring discharges, land-surface altitudes, spring discharge at Fish Springs, and changes in discharge on selected creek reaches were measurement observations. The effects of uncertain distributed groundwater-discharge estimates in Spring and Snake Valleys on transmissivity estimates were bounded with alternative models. Annual distributed groundwater discharges from Spring and Snake Valleys in the alternative models totaled 151,000 and 227,000 acre-feet, respectively, and represented 20 percent differences from the 187,000 acre-feet per year that discharges from the GBNP-C model. Transmissivity estimates in the basin fill between Baker and Big Springs changed less than 50 percent between the two alternative models. Potential effects of pumping from Snake Valley were estimated with the Great Basin National Park predictive (GBNP-P) model, which is a transient groundwater-flow model. The hydraulic conductivity of basin fill and transmissivity of basement rock were the GBNP-C model distributions. Specific yields were defined from aquifer tests. Captures of distributed groundwater and spring discharges were simulated in the GBNP-P model using a combination of well and drain packages in MODFLOW. Simulated groundwater captures could not exceed measured groundwater-discharge rates. Four groundwater-development scenarios were investigated where total annual withdrawals ranged from 10,000 to 50,000 acre-feet during a 200-year pumping period. Four additional scenarios also were simulated that added the effects of existing pumping in Snake Valley.
Potential groundwater pumping locations were limited to nine proposed points of diversion. Results are presented as maps of groundwater capture and drawdown, time series of drawdowns and discharges from selected wells, and time series of discharge reductions from selected springs and control volumes. Simulated drawdown propagation was attenuated where groundwater discharge could be captured. General patterns of groundwater capture and water-table declines were similar for all scenarios. Simulated drawdowns greater than 1 ft propagated outside of Spring and Snake Valleys after 200 years of pumping in all scenarios.

  6. Validation of Earth atmosphere models using solar EUV observations from the CORONAS and PROBA2 satellites in occultation mode

    NASA Astrophysics Data System (ADS)

    Slemzin, Vladimir; Ulyanov, Artyom; Gaikovich, Konstantin; Kuzin, Sergey; Pertsov, Andrey; Berghmans, David; Dominique, Marie

    2016-02-01

    Aims: Knowledge of the properties of the Earth's upper atmosphere is important for predicting the lifetime of low-orbit spacecraft as well as for planning the operation of space instruments whose data may be distorted by atmospheric effects. The accuracy of the models commonly used for simulating the structure of the atmosphere is limited by the scarcity of the observations they are based on, so improvement of these models requires validation under different atmospheric conditions. Measurements of the absorption of solar extreme ultraviolet (EUV) radiation in the upper atmosphere below 500 km by instruments operating on low-Earth orbit (LEO) satellites provide efficient means for such validation as well as for continuous monitoring of the upper atmosphere and for studying its response to solar and geomagnetic activity. Method: This paper presents results of measurements of the solar EUV radiation in the 17 nm wavelength band made with the SPIRIT and TESIS telescopes on board the CORONAS satellites and the SWAP telescope on board the PROBA2 satellite in the occulted parts of the satellite orbits. The transmittance profiles of the atmosphere at altitudes between 150 and 500 km were derived at different phases of solar activity during solar cycles 23 and 24, in the quiet state of the magnetosphere and during the development of a geomagnetic storm. We developed a mathematical procedure based on the Tikhonov regularization method for the solution of ill-posed problems in order to retrieve extinction coefficients from the transmittance profiles. The transmittance profiles derived from the data and the retrieved extinction coefficients are compared with simulations carried out with the NRLMSISE-00 atmosphere model maintained by the Naval Research Laboratory (USA) and the DTM-2013 model developed at CNES in the framework of the FP7 project ATMOP. Results: Under quiet and slightly disturbed magnetospheric conditions during high and low solar activity, the extinction coefficients calculated by both models agreed with the measurements within the data errors. The NRLMSISE-00 model was not able to predict the enhancement of extinction above 300 km observed 14 h after the beginning of a geomagnetic storm, whereas the DTM-2013 model described this variation with good accuracy.

  7. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    PubMed

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

    The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved for by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases at a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and in aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are uniformly subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. Hydraulic-conductivity estimates simulated with AnalyzeHOLE for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and, therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivities: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  9. Optimal distance of multi-plane sensor in three-dimensional electrical impedance tomography.

    PubMed

    Hao, Zhenhua; Yue, Shihong; Sun, Benyuan; Wang, Huaxiang

    2017-12-01

    Electrical impedance tomography (EIT) is a visual imaging technique for obtaining the conductivity and permittivity distributions in the domain of interest. As an advanced technique, EIT has the potential to be a valuable tool for continuous bedside monitoring of pulmonary function. EIT applications in any three-dimensional (3D) field are limited by 3D effects, i.e., the distribution of the electric field spreads far beyond the electrode plane. The 3D effects can result in measurement errors and image distortion. An important way to overcome the 3D effects is to use multiple groups of sensors. The aim of this paper is to find the best spatial resolution of the EIT image over various electrode planes, select an optimal plane spacing in a 3D EIT sensor, and provide guidance for 3D EIT electrode placement in monitoring lung function. In simulation and experiment, several typical conductivity distribution models, such as one rod (central, midway and edge), two rods and three rods, are set at different plane spacings between the two electrode planes. A Tikhonov regularization algorithm is utilized for reconstructing the images; the relative error and the correlation coefficient are utilized for evaluating the image quality. Based on numerical simulation and experimental results, the image performance at different spacing conditions is evaluated. The results demonstrate that there exists an optimal plane spacing between the two electrode planes for a 3D EIT sensor. The selection of the optimal plane spacing between the electrode planes is then suggested for the electrode placement of a multi-plane EIT sensor.
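
    The two figures of merit used for the evaluation are simple to compute; a minimal sketch (names hypothetical):

```python
import numpy as np

def image_quality(sigma_rec, sigma_ref):
    """Relative image error and correlation coefficient between a
    reconstructed conductivity image and the reference distribution."""
    err = np.linalg.norm(sigma_rec - sigma_ref) / np.linalg.norm(sigma_ref)
    corr = np.corrcoef(sigma_rec.ravel(), sigma_ref.ravel())[0, 1]
    return err, corr
```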

  10. Finite-fault source inversion using teleseismic P waves: Simple parameterization and rapid analysis

    USGS Publications Warehouse

    Mendoza, C.; Hartzell, S.

    2013-01-01

    We examine the ability of teleseismic P waves to provide a timely image of the rupture history for large earthquakes using a simple, 2D finite‐fault source parameterization. We analyze the broadband displacement waveforms recorded for the 2010 Mw∼7 Darfield (New Zealand) and El Mayor‐Cucapah (Baja California) earthquakes using a single planar fault with a fixed rake. Both of these earthquakes were observed to have complicated fault geometries following detailed source studies conducted by other investigators using various data types. Our kinematic, finite‐fault analysis of the events yields rupture models that similarly identify the principal areas of large coseismic slip along the fault. The results also indicate that the amount of stabilization required to spatially smooth the slip across the fault and minimize the seismic moment is related to the amplitudes of the observed P waveforms and can be estimated from the absolute values of the elements of the coefficient matrix. This empirical relationship persists for earthquakes of different magnitudes and is consistent with the stabilization constraint obtained from the L‐curve in Tikhonov regularization. We use the relation to estimate the smoothing parameters for the 2011 Mw 7.1 East Turkey, 2012 Mw 8.6 Northern Sumatra, and 2011 Mw 9.0 Tohoku, Japan, earthquakes and invert the teleseismic P waves in a single step to recover timely, preliminary slip models that identify the principal source features observed in finite‐fault solutions obtained by the U.S. Geological Survey National Earthquake Information Center (USGS/NEIC) from the analysis of body‐ and surface‐wave data. These results indicate that smoothing constraints can be estimated a priori to derive a preliminary, first‐order image of the coseismic slip using teleseismic records.

  11. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm for ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also is robust to noise.

  12. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.

  13. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods. PMID:25506389

  14. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection by exploiting the sparsity of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size in the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies for selecting the regularization parameter in the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses be close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the degree of sensitivity of the damage identification problem to the regularization parameter.
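
    The first selection strategy can be sketched as a sweep over candidate parameters that traces the residual norm and the solution l1-norm; the plain ISTA inner solver below is an illustrative choice, not the authors' solver.

```python
import numpy as np

def scan_l1_parameter(A, b, lams, n_iter=3000):
    """For each lam, solve min ||Ax - b||^2 + lam*||x||_1 by ISTA and
    record (lam, residual norm, solution l1-norm); a suitable lam range
    is one in which both norms stay small simultaneously."""
    L = np.linalg.norm(A, 2) ** 2          # gradient Lipschitz constant is 2L,
    curve = []                             # giving step 1/(2L), threshold lam/(2L)
    for lam in lams:
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - b) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)
        curve.append((lam, np.linalg.norm(A @ x - b), np.abs(x).sum()))
    return curve
```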

  15. Degradation in finite-harmonic subcarrier demodulation

    NASA Technical Reports Server (NTRS)

    Feria, Y.; Townes, S.; Pham, T.

    1995-01-01

    Previous estimates of the degradation due to a subcarrier loop assume a square-wave subcarrier. This article provides a closed-form expression for the degradation due to the subcarrier loop when a finite number of harmonics is used to demodulate the subcarrier, as in the case of the buffered telemetry demodulator. We compared the degradation using a square wave with that using finite harmonics in the subcarrier demodulation and found that, for a low loop signal-to-noise ratio, using finite harmonics leads to lower degradation. The analysis assumes that the phase noise in the subcarrier (SC) loop has a Tikhonov distribution; this assumption is valid for first-order loops.
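
    The Tikhonov phase-error density mentioned here coincides with the von Mises distribution, with the loop signal-to-noise ratio playing the role of the concentration parameter. A minimal check of that identity, using an illustrative loop SNR:

        # Minimal sketch: the first-order-loop Tikhonov density
        # p(phi) = exp(rho*cos(phi)) / (2*pi*I0(rho)) on [-pi, pi]
        # equals the von Mises density with kappa = rho (the loop SNR).
        import numpy as np
        from scipy.special import i0
        from scipy.stats import vonmises

        rho = 10.0                                   # illustrative loop SNR
        phi = np.linspace(-np.pi, np.pi, 7)
        manual = np.exp(rho * np.cos(phi)) / (2 * np.pi * i0(rho))
        print(np.allclose(manual, vonmises.pdf(phi, kappa=rho)))  # True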

  16. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During the analysis of experimental data, one usually needs to restore a signal after it has been convolved with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on a Bayesian approach to the regularization strategy.
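
    The Bayesian view can be illustrated with a toy deconvolution in a few lines of numpy. This is a hedged sketch of the general idea, not the implementation described in the article: under Gaussian noise and a Gaussian smoothness prior, the posterior mean coincides with a Tikhonov-regularized solution.

        # Toy statistical-regularization sketch: y = K f + e with
        # e ~ N(0, s^2 I) and prior f ~ N(0, (a L^T L)^{-1}) built on
        # first differences; the posterior mean is the Tikhonov solution.
        # Apparatus function, prior strength and noise level are assumed.
        import numpy as np

        n = 80
        t = np.linspace(0.0, 1.0, n)
        K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2))
        f_true = np.exp(-((t - 0.4) ** 2) / 0.002) \
                 + 0.5 * np.exp(-((t - 0.7) ** 2) / 0.002)
        rng = np.random.default_rng(1)
        s = 0.01
        y = K @ f_true + s * rng.standard_normal(n)

        L = np.diff(np.eye(n), axis=0)               # first-difference operator
        a = 1e-2                                     # prior strength (assumed)
        f_mean = np.linalg.solve(K.T @ K / s**2 + a * L.T @ L, K.T @ y / s**2)
        print("relative error:",
              np.linalg.norm(f_mean - f_true) / np.linalg.norm(f_true))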

  17. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The model conforms to an ellipsoid constraint; the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by different matrix decomposition methods is not unique, and an unknown rotation matrix R relates them. This paper puts forward a constant intersection angle method (the angle between the geomagnetic field and the gravitational field is fixed) to estimate R. The Tikhonov method is adopted to prevent rounding or other errors from seriously affecting the calculation of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, a simulation experiment with a signal-to-noise ratio of 50 dB indicates that the heading error declines from ±1° when calibrated by classical ellipsoid fitting to ±0.2° when calibrated by the constant intersection angle method. A physical experiment shows that the heading error is further corrected from ±0.8° with classical ellipsoid fitting to ±0.3° with the constant intersection angle method. PMID:24831110
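
    The role Tikhonov damping plays here can be shown generically. Below is a sketch (not the authors' calibration code) of how a small damping term stabilizes a badly conditioned least-squares solve such as the one that determines R:

        # Generic sketch: with condition number ~1e10, tiny measurement
        # errors ruin the naive least-squares solution, while a small
        # Tikhonov damping term keeps the estimate usable. Data synthetic.
        import numpy as np

        rng = np.random.default_rng(2)
        U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
        V, _ = np.linalg.qr(rng.standard_normal((9, 9)))
        sv = np.logspace(0, -10, 9)                  # condition number ~1e10
        A = U[:, :9] @ np.diag(sv) @ V.T
        x_true = rng.standard_normal(9)
        b = A @ x_true + 1e-6 * rng.standard_normal(50)

        naive = np.linalg.lstsq(A, b, rcond=None)[0]
        lam = 1e-6
        tikh = np.linalg.solve(A.T @ A + lam * np.eye(9), A.T @ b)
        print("naive error:   ", np.linalg.norm(naive - x_true))
        print("tikhonov error:", np.linalg.norm(tikh - x_true))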

  18. Correction of engineering servicing regularity of transport-technological machines in the operational process

    NASA Astrophysics Data System (ADS)

    Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.

    2018-03-01

    The article considers the correction of engineering servicing regularity on the basis of actual dependability data for cars in operation. The purpose of the research is to increase the dependability of transport-technological machines by correcting their engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. On the basis of an analysis of previous research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity by various mathematical models is considered, and the exponential model is shown to be the most appropriate for this purpose. The obtained results can be used as a stand-alone method of correcting engineering servicing regularity under given operational conditions, as well as for improving the technical-economic and economic-stochastic methods. Thus, on the basis of the conducted research, a method for correcting the engineering servicing regularity of transport-technological machines in operation was developed. The use of this method will reduce the number of failures.

  19. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To enable fast reconstruction of quantitative susceptibility maps with a Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering, and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
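
    The core of the fast solver described above is a split in which one subproblem is diagonalized by the FFT and the other reduces to elementwise soft thresholding. A stripped-down 1D stand-in (not the published QSM pipeline; kernel, data and parameters are invented) makes the structure of one iteration explicit:

        # 1D sketch of FFT-based variable splitting (ADMM form): minimize
        # 0.5*||h * x - y||^2 + lam*||x||_1, where h * x is a circular
        # convolution, so the x-update is a pointwise divide in Fourier
        # space and the z-update is a soft threshold.
        import numpy as np

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        n = 256
        rng = np.random.default_rng(3)
        h = np.exp(-np.arange(n) / 4.0)
        h /= h.sum()                                 # assumed blur kernel
        H = np.fft.fft(h)
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
        y = np.real(np.fft.ifft(H * np.fft.fft(x_true)))
        y += 0.01 * rng.standard_normal(n)

        lam, rho = 0.02, 1.0
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        Y = np.fft.fft(y)
        for _ in range(100):
            # x-update: (H^H H + rho I) x = H^H y + rho (z - u)
            x = np.real(np.fft.ifft((np.conj(H) * Y + rho * np.fft.fft(z - u))
                                    / (np.abs(H) ** 2 + rho)))
            z = soft(x + u, lam / rho)               # elementwise threshold
            u = u + x - z                            # dual update
        print("nonzeros in solution:", int((np.abs(z) > 1e-3).sum()))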

  20. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regards to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescentmore » photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as l{sub 2} data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requests the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibited its calculated efficiency over the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial speculations regarding the regularization parameter were used to illustrate the convergence of our algorithm. Finally, in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatic regularization parameters. The proposed algorithm exhibited superior performance than both the heuristic regularization parameter choice method and L-curve method based on the computational speed and localization error.« less

  1. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
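
    The decomposition described in the abstract, a gradient step on the smooth data-fidelity term followed by a proximal update on the regularizer, has exactly the shape of the generic proximal-gradient template. A hedged toy sketch on a least-squares problem (not the USCT wave solver, and plain proximal gradient rather than the paper's source-encoded dual averaging method):

        # Toy gradient-plus-proximal iteration for
        # 0.5*||A x - y||^2 + lam*||x||_1; the regularizer is handled
        # only through its proximal operator and is never differentiated.
        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.standard_normal((60, 120))
        x_true = np.zeros(120)
        x_true[rng.choice(120, 6, replace=False)] = 1.0
        y = A @ x_true + 0.01 * rng.standard_normal(60)

        lam = 0.1
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2
        x = np.zeros(120)
        for _ in range(500):
            v = x - step * (A.T @ (A @ x - y))       # gradient descent step
            x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # prox
        print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))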

  2. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  3. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    This work develops a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated approach to finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
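
    Both ingredients, a damped LSQR solve and a simplex search over the parameter, are available off the shelf. A minimal sketch follows; the search criterion below is a simple discrepancy misfit standing in for the paper's image-quality objective, and all data are synthetic:

        # Sketch: scipy's LSQR applies Tikhonov damping via `damp`, and
        # Nelder-Mead (a simplex method) tunes the damping parameter.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(5)
        A = rng.standard_normal((200, 100)) @ np.diag(np.logspace(0, -6, 100))
        x_true = rng.standard_normal(100)
        noise = 0.01 * rng.standard_normal(200)
        b = A @ x_true + noise
        delta = np.linalg.norm(noise)                # assumed known noise level

        def misfit(log_lam):
            x = lsqr(A, b, damp=10.0 ** log_lam[0])[0]
            return abs(np.linalg.norm(A @ x - b) - delta)

        res = minimize(misfit, x0=[-3.0], method="Nelder-Mead")
        print("selected damping:", 10.0 ** res.x[0])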

  4. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the “Regular Rate” Bonuses § 778.209 Method of inclusion of...

  5. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the “Regular Rate” Bonuses § 778.209 Method of inclusion of...

  6. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the “Regular Rate” Bonuses § 778.209 Method of inclusion of...

  7. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the “Regular Rate” Bonuses § 778.209 Method of inclusion of...

  8. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the “Regular Rate” Bonuses § 778.209 Method of inclusion of...

  9. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
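
    The single-graph ranking that MultiG-Rank generalizes has a simple closed form, which makes the multiple-graph idea easy to sketch. In the toy below the graph weights are fixed by hand, whereas the paper learns them jointly with the ranking scores; the data and parameters are invented:

        # Toy graph-regularized ranking: scores f solve (I + a*L) f = y
        # for a query indicator y; L is a weighted sum of several k-NN
        # graph Laplacians (weights fixed here, learned in MultiG-Rank).
        import numpy as np

        def knn_laplacian(X, k):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            W = np.zeros_like(d2)
            for i in range(len(X)):
                nn = np.argsort(d2[i])[1:k + 1]      # skip self at index 0
                W[i, nn] = np.exp(-d2[i, nn] / d2[i, nn].mean())
            W = np.maximum(W, W.T)                   # symmetrize
            return np.diag(W.sum(1)) - W

        rng = np.random.default_rng(6)
        X = np.vstack([rng.normal(0, 0.3, (20, 2)),  # two synthetic clusters
                       rng.normal(2, 0.3, (20, 2))])
        graphs = [knn_laplacian(X, k) for k in (3, 5, 8)]
        weights = np.array([0.2, 0.5, 0.3])          # fixed for illustration
        L = sum(w * G for w, G in zip(weights, graphs))

        y = np.zeros(40); y[0] = 1.0                 # query item
        f = np.linalg.solve(np.eye(40) + 10.0 * L, y)
        print("top-5 ranked items:", np.argsort(-f)[:5])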

  10. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  11. Multiple graph regularized protein domain ranking

    PubMed Central

    2012-01-01

    Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331

  12. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties and region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.

  13. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., is better characterized by an extremely heavy-tailed distribution than by a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  14. Spatially adapted second-order total generalized variational image deblurring model under impulse noise

    NASA Astrophysics Data System (ADS)

    Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen

    2018-04-01

    Image deblurring under impulse noise is a typical ill-posed problem that requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term combined with a total variation (TV) regularizer constitutes a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts that degrade image quality. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV and eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.

  15. A novel Tikhonov-based approach for harmonized high-accuracy retrieval of methane columns and profiles from NDACC FTIR network measurements. Application to global validation of ENVISAT/SCIAMACHY biases

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Borsdorff, T.; Buchwitz, M.; Duchatelet, P.; Frankenberg, C.; Hase, F.; Jones, N.; Petersen, K.; Taylor, J.

    2009-04-01

    The first goal of this paper is to present an original approach for the retrieval of methane columns and profiles from ground-based mid-infrared solar FTIR routine measurements performed within the Network for the Detection of Atmospheric Composition Change (NDACC). It is based on an altitude-constant first-order Tikhonov (L1) regularization, applied to the inversion of methane profiles given in units of percentage of the volume mixing ratio at each layer altitude. A mathematical presentation of this regularization matrix can be found in Sussmann and Borsdorff (2007, equations B3 and B4 therein). We show that this approach is ideally suited to achieving a harmonized retrieval for a set of different, globally distributed FTIR stations, since it is extraordinarily simple and robust. This is because it is directly related to the well-tested classical retrieval approach of simple volume mixing ratio profile scaling (via one altitude-constant scaling factor), but allows for some additional flexibility in the shape of the retrieved profiles. This helps to get the total columns better integrated, even in the presence of spectral perturbations (e.g., from clouds). The amount of flexibility of the retrieved profile shape (relative to the a priori profile) can easily be tuned empirically against one figure of merit, such as minimum diurnal variation of the retrieved columns, or minimum profile oscillations within the retrieved ensemble. Sensitivity studies are presented showing the optimization procedure and an error characterization of the new retrieval. Based on this approach, and in order to guarantee a station-to-station consistency of <1% for satellite validation, we performed a general harmonization effort for 13 selected, globally distributed NDACC FTIR stations. Station-to-station biases are eliminated by using identical micro-windows, spectroscopic line lists, retrieval parameters, and sources of ancillary data such as pressure-temperature profiles and water vapor data for deriving dry air columns. Furthermore, a geophysically consistent set of a priori profiles for the retrievals at all stations was established. Global satellite measurements of column-averaged methane have recently shown a step forward in data quality via year 2003 and 2004 retrievals from two different processors, namely IMAP-DOAS ver. 49 and WFM-DOAS ver. 1.0 (Frankenberg et al., 2008; Buchwitz et al., 2008). Accuracy and precision have approached the order of 1% and can be considered for inverse modelling of sources and sinks. At the same time, this means that the quality requirements for ground-based validation data have become higher, which has been addressed by the harmonization effort described above. Our network validation study utilizes the validation strategy developed during the first validation of ENVISAT/SCIAMACHY column-averaged methane by FTIR (Sussmann et al., 2005). The outcome of the new study is the accurate determination of the satellite-ground station biases as a function of latitude on a global scale. Acknowledgments: Funding by the EC project HYMN (contract GOCE 037048) and the DLR project SATVAL-A (DLR 50EE 0702) is gratefully acknowledged. We thank T. Blumenstock (FZK/IMK-ASF), J. P. Burrows (Univ. Bremen), B. Dils (BIRA), J. Hannigan (NCAR), J. Klyft (Chalmers), E. Mahieu (Univ. Liege), M. De Mazière (BIRA), J. Mellqvist (Chalmers), J. Notholt (Univ. Bremen), M. Rettinger (FZK/IMK-IFU), O. Schneising (Univ. Bremen), K. Strong (Univ. Toronto), and C. Vigouroux (BIRA) for valuable contributions.
    References
    Frankenberg, C., Bergamaschi, P., Butz, A., Houweling, S., Meirink, J. F., Notholt, J., Petersen, A. K., Schrijver, H., Warneke, T., and Aben, I.: Tropical methane emissions: A revised view from SCIAMACHY onboard ENVISAT, Geophys. Res. Lett., 35, L15811, doi:10.1029/2008GL034300, 2008.
    Schneising, O., Buchwitz, M., Burrows, J. P., Bovensmann, H., Bergamaschi, P., and Peters, W.: Three years of greenhouse gas column-averaged dry air mole fractions retrieved from satellite - Part 2: Methane, Atmos. Chem. Phys. Discuss., 8, 8273-8326, 2008.
    Sussmann, R., Stremme, W., Buchwitz, M., and de Beek, R.: Validation of ENVISAT/SCIAMACHY columnar methane by solar FTIR spectrometry at the Ground-Truthing Station Zugspitze, Atmos. Chem. Phys., 5, 2419-2429, 2005.
    Sussmann, R. and Borsdorff, T.: Technical note: Interference errors in infrared remote sounding of the atmosphere, Atmos. Chem. Phys., 7, 3537-3557, 2007.
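
    The altitude-constant first-order Tikhonov constraint used here penalizes differences between adjacent layers of the scaled profile, so a very strong constraint collapses the retrieval to plain profile scaling. A schematic numpy sketch with a made-up forward kernel (not the station retrieval software):

        # Schematic L1-regularized retrieval: the state is the per-layer
        # percentage change of the VMR profile, and the penalty
        # ||L1 x||^2 uses the first-difference operator, so large alpha
        # forces a nearly altitude-constant (pure scaling) solution.
        import numpy as np

        nlev, nobs = 30, 60
        rng = np.random.default_rng(7)
        K = rng.standard_normal((nobs, nlev)) * np.exp(-np.arange(nlev) / 10.0)
        x_true = 5.0 + 2.0 * np.sin(np.arange(nlev) / 5.0)  # % per layer
        y = K @ x_true + 0.05 * rng.standard_normal(nobs)

        L1 = np.diff(np.eye(nlev), axis=0)           # first-order operator
        for alpha in (1e-2, 1e2, 1e6):
            x = np.linalg.solve(K.T @ K + alpha * L1.T @ L1, K.T @ y)
            print(f"alpha={alpha:.0e}  profile spread={x.max() - x.min():.3f}")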

  16. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  17. Decomposition of Near-Infrared Spectroscopy Signals Using Oblique Subspace Projections: Applications in Brain Hemodynamic Monitoring.

    PubMed

    Caicedo, Alexander; Varon, Carolina; Hunyadi, Borbala; Papademetriou, Maria; Tachtsidis, Ilias; Van Huffel, Sabine

    2016-01-01

    Clinical data comprise a large number of synchronously collected biomedical signals measured at different locations. Deciphering the interrelationships of these signals can yield important information about their dependence, providing useful clinical diagnostic data. For instance, by computing the coupling between near-infrared spectroscopy (NIRS) signals and systemic variables, the status of the hemodynamic regulation mechanisms can be assessed. In this paper we introduce an algorithm for the decomposition of NIRS signals into additive components. The algorithm, SIgnal DEcomposition based on Oblique Subspace Projections (SIDE-ObSP), assumes that the measured NIRS signal is a linear combination of the systemic measurements, following the linear regression model y = Ax + ϵ. SIDE-ObSP decomposes the output such that each component in the decomposition represents the sole linear influence of one corresponding regressor variable. This decomposition scheme aims at providing a better understanding of the relation between NIRS and systemic variables, and at providing a framework for the clinical interpretation of regression algorithms, thereby facilitating their introduction into clinical practice. SIDE-ObSP combines oblique subspace projections (ObSP) with the structure of a moving average system in order to define adequate signal subspaces. To guarantee smoothness in the estimated regression parameters, as observed in normal physiological processes, we impose a Tikhonov regularization using a matrix differential operator. We evaluate the performance of SIDE-ObSP using a synthetic dataset, and present two case studies in the field of cerebral hemodynamics monitoring using NIRS. In addition, we compare the performance of this method with other system identification techniques. In the first case study, data from 20 neonates during the first 3 days of life were used; here SIDE-ObSP decoupled the influence of changes in arterial oxygen saturation from the NIRS measurements, facilitating the use of NIRS as a surrogate measure for cerebral blood flow (CBF). The second case study used data from a 3-year-old infant under extracorporeal membrane oxygenation (ECMO); here SIDE-ObSP decomposed cerebral/peripheral tissue oxygenation as a sum of the partial contributions from different systemic variables, facilitating the comparison between the effects of each systemic variable on the cerebral/peripheral hemodynamics.

  18. Miniaturized Near Infrared Heterodyne Spectroradiometer for Monitoring CO2, CH4 and CO in the Earth Atmosphere

    NASA Astrophysics Data System (ADS)

    Klimchuk, A., Sr.; Rodin, A.; Nadezhdinskiy, A.; Churbanov, D.; Spiridonov, M.

    2014-12-01

    The paper describes the concept of a compact, lightweight heterodyne NIR spectro-radiometer suitable for atmospheric sounding with solar occultations, and the first measurements of CO2 and CH4 absorption near 1.60 µm and 1.65 µm with a spectral resolution of λ/δλ ~ 5×10^7. A highly stabilized DFB laser was used as the local oscillator, while a single-mode quartz fiber Y-coupler served as a diplexer. Radiation mixed in the single-mode fiber was detected by a quadratic detector using a p-i-n diode within a bandpass of ~10 MHz. Wavelength coverage of the spectral measurement was provided by sweeping the local oscillator frequency over a range of 1.1 cm-1. With an exposure time of 10 min, the absorption spectrum of the atmosphere over Moscow was recorded with S/N ~ 300. We retrieved the methane vertical profile using the Tikhonov method with a smoothing functional, which takes into account a priori information about the first-guess profile. The reference to a model methane profile means that the regularization procedure always selects a priori values unless the measurements contradict this assumption. The retrieved methane profile demonstrates higher abundances in the lower scale height compared to the assumed model profile, as expected in the center of a megalopolis. The retrieval sensitivity is limited to 10 ppb, with the exception of the lower part of the profile, where a tendency to lower values is revealed. Thus methane abundance variations may be evaluated with a relative accuracy better than 1%, which fits the requirements of greenhouse gas monitoring. The retrieval sensitivity for CO2 is about 1-2 ppm. CO2 observations were also used to estimate stratospheric wind from the Doppler shift of an absorption line. Due to its higher spectral resolution and lower sensitivity to atmospheric temperatures and other external factors compared to heterodyne measurements in the thermal IR spectral range, the described technique provides accuracy comparable to the much more complicated high-resolution measurements now used at TCCON stations. The relative simplicity of the proposed scheme opens the perspective of employing it for high-resolution spectroscopy in various applications. In particular, it may allow solar occultation observations of CO2, CO, CH4, H2S, C2H4 and other gases from spacecraft, airborne, or ground-based platforms.

  19. X-ray computed tomography using curvelet sparse regularization.

    PubMed

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  20. Coarse-graining time series data: Recurrence plot of recurrence plots and its application for music

    NASA Astrophysics Data System (ADS)

    Fukino, Miwa; Hirata, Yoshito; Aihara, Kazuyuki

    2016-02-01

    We propose a nonlinear time series method for characterizing two layers of regularity simultaneously. The key of the method is using the recurrence plots hierarchically, which allows us to preserve the underlying regularities behind the original time series. We demonstrate the proposed method with musical data. The proposed method enables us to visualize both the local and the global musical regularities or two different features at the same time. Furthermore, the determinism scores imply that the proposed method may be useful for analyzing emotional response to the music.
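
    A recurrence plot marks pairs of times whose states fall within a threshold of each other, and the hierarchical construction computes a second recurrence plot whose "states" are sub-blocks of the first. A toy sketch of that idea follows; the window length and thresholds are invented, and this is an interpretation of the construction, not the authors' implementation:

        # Toy two-layer recurrence plot: R1 captures local regularity of
        # the raw series; R2 compares windows (sub-blocks) of R1 and so
        # captures regularity on a coarser scale.
        import numpy as np

        def recurrence_plot(states, eps):
            d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
            return d < eps

        t = np.arange(400)
        x = np.sin(2 * np.pi * t / 25) + 0.3 * np.sin(2 * np.pi * t / 120)
        R1 = recurrence_plot(x[:, None], eps=0.2)    # local layer

        w = 25                                       # assumed window length
        windows = np.array([R1[i:i + w, i:i + w].ravel()
                            for i in range(0, len(t) - w, w)], dtype=float)
        R2 = recurrence_plot(windows, eps=0.5 * w)   # global layer
        print(R1.shape, R2.shape)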

  1. Coarse-graining time series data: Recurrence plot of recurrence plots and its application for music.

    PubMed

    Fukino, Miwa; Hirata, Yoshito; Aihara, Kazuyuki

    2016-02-01

    We propose a nonlinear time series method for characterizing two layers of regularity simultaneously. The key of the method is using the recurrence plots hierarchically, which allows us to preserve the underlying regularities behind the original time series. We demonstrate the proposed method with musical data. The proposed method enables us to visualize both the local and the global musical regularities or two different features at the same time. Furthermore, the determinism scores imply that the proposed method may be useful for analyzing emotional response to the music.

  2. A novel approach of ensuring layout regularity correct by construction in advanced technologies

    NASA Astrophysics Data System (ADS)

    Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic

    2017-03-01

    In advanced technology nodes, layout regularity has become a mandatory prerequisite for creating robust designs that are less sensitive to variations in the manufacturing process, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full-custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on the fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The Regularity Index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for the 28nm and 40nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to 5 nm reduction in PV band.

  3. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman-iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithm outperforms some other regularization methods. PMID:23776560

  4. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
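
    One step of the iteratively regularized Gauss-Newton method solves a Tikhonov-damped linearization around the current iterate, with the regularization strength decreasing geometrically. A toy sketch on a small synthetic problem follows; the paper's heuristic stopping functional is replaced by a simple surrogate score, so only the iteration itself is faithful:

        # IRGN sketch: x_{k+1} = x_k + (J^T J + a_k I)^{-1}
        #                          (J^T (y - F(x_k)) + a_k (x0 - x_k))
        import numpy as np

        def F(x):                                    # assumed forward map
            return np.array([x[0]**2 + x[1], x[0] + x[1]**3, x[0] * x[1]])

        def J(x):                                    # its Jacobian
            return np.array([[2*x[0], 1.0],
                             [1.0, 3*x[1]**2],
                             [x[1], x[0]]])

        rng = np.random.default_rng(8)
        x_true = np.array([1.0, 0.5])
        y = F(x_true) + 1e-3 * rng.standard_normal(3)

        x0 = np.array([0.5, 0.5])
        x, alpha = x0.copy(), 1.0
        best, best_score = x.copy(), np.inf
        for k in range(15):
            Jk, r = J(x), y - F(x)
            x = x + np.linalg.solve(Jk.T @ Jk + alpha * np.eye(2),
                                    Jk.T @ r + alpha * (x0 - x))
            score = alpha * np.linalg.norm(y - F(x))  # surrogate, not the rule
            if score < best_score:
                best, best_score = x.copy(), score
            alpha *= 0.5                             # geometric decrease
        print("estimate:", best, " true:", x_true)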

  5. Damage identification method for continuous girder bridges based on spatially-distributed long-gauge strain sensing under moving loads

    NASA Astrophysics Data System (ADS)

    Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi

    2018-05-01

    A novel damage identification method for concrete continuous girder bridges based on spatially-distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges, which changes with the location of vehicles on the bridge, is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of the long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number or weight of vehicles. Finally, a real bridge test on a highway is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges; the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.

  6. An update of the Death Valley regional groundwater flow system transient model, Nevada and California

    USGS Publications Warehouse

    Belcher, Wayne R.; Sweetkind, Donald S.; Faunt, Claudia C.; Pavelko, Michael T.; Hill, Mary C.

    2017-01-19

    Since the original publication of the Death Valley regional groundwater flow system (DVRFS) numerical model in 2004, more information on the regional groundwater flow system in the form of new data and interpretations has been compiled. Cooperators such as the Bureau of Land Management, National Park Service, U.S. Fish and Wildlife Service, the Department of Energy, and Nye County, Nevada, recognized a need to update the existing regional numerical model to maintain its viability as a groundwater management tool for regional stakeholders. The existing DVRFS numerical flow model was converted to MODFLOW-2005, updated with the latest available data, and recalibrated. Five main data sets were revised: (1) recharge from precipitation varying in time and space, (2) pumping data, (3) water-level observations, (4) an updated regional potentiometric map, and (5) a revision to the digital hydrogeologic framework model. The resulting DVRFS version 2.0 (v. 2.0) numerical flow model simulates groundwater flow conditions for the Death Valley region from 1913 to 2003 to correspond to the time frame of the most recently published (2008) water-use data. The DVRFS v. 2.0 model was calibrated by using the Tikhonov regularization functionality in the parameter estimation and predictive uncertainty software PEST. In order to assess the accuracy of the numerical flow model in simulating regional flow, the fit of simulated to target values (consisting of hydraulic heads and flows, including evapotranspiration and spring discharge, flow across the model boundary, and interbasin flow; the regional water budget; values of parameter estimates; and sensitivities) was evaluated. This evaluation showed that DVRFS v. 2.0 simulates conditions similar to DVRFS v. 1.0. Comparisons of the target values with simulated values also indicate that they match reasonably well, and in some cases (boundary flows and discharge) significantly better than in DVRFS v. 1.0.

  7. Coseismic fault slip associated with the 1992 M(sub w) 6.1 Joshua Tree, California, earthquake: Implications for the Joshua Tree-Landers earthquake sequence

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken

    1995-01-01

    Coseismic surface deformation associated with the M(sub w) 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal-to-noise ratio for these inferred displacements is 1.8, with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator that minimizes stress variability is applied to these estimates, subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 x 10(exp 18) N m with a corresponding maximum slip of around 0.8 m, and compares well with independent and complementary information including seismic moment and source time function estimates and main shock and aftershock locations. From empirical Green's function analyses, a rupture duration of 5 s is obtained, which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the main shock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M(sub w) 7.3, June 28, 1992, Landers earthquake, which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.

  8. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
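
    The key building block of spectral regularization is the proximal operator of the nuclear norm, which soft-thresholds the singular values. A hedged sketch of that single ingredient (not the paper's full estimation algorithm; data and threshold are synthetic):

        # Singular value thresholding: the prox of tau * (nuclear norm).
        # Applied to a noisy matrix it shrinks toward a low-rank estimate.
        import numpy as np

        def svt(M, tau):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        rng = np.random.default_rng(9)
        B_true = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))
        noisy = B_true + 0.5 * rng.standard_normal((20, 20))

        B_hat = svt(noisy, tau=5.0)                  # illustrative threshold
        print("rank of estimate:", np.linalg.matrix_rank(B_hat, tol=1e-8))
        print("relative error:",
              np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true))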

  9. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, the potential function is decomposed into two or three components, so that the singular component can be analytically represented by the Green's function while the other components possess higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by the inexact-Newton method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement, as it circumvents the need to solve a boundary value Poisson equation inside the molecular interface and to compute the related interface jump conditions numerically. Moreover, the new MIB algorithm is computationally less expensive while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, for which analytical solutions are available, and on a series of proteins of various sizes.

  10. Multipole Vortex Blobs (MVB): Symplectic Geometry and Dynamics.

    PubMed

    Holm, Darryl D; Jacobs, Henry O

    2017-01-01

    Vortex blob methods are typically characterized by a regularization length scale, below which the dynamics are trivial for isolated blobs. In this article, we observe that the dynamics need not be trivial if one is willing to consider distributional derivatives of Dirac delta functionals as valid vorticity distributions. More specifically, a new singular vortex theory is presented for regularized Euler fluid equations of ideal incompressible flow in the plane. We determine the conditions under which such regularized Euler fluid equations may admit vorticity singularities which are stronger than delta functions, e.g., derivatives of delta functions. We also describe the symplectic geometry associated with these augmented vortex structures, and we characterize the dynamics as Hamiltonian. Applications to the design of numerical methods similar to vortex blob methods are also discussed. Such findings illuminate the rich dynamics which occur below the regularization length scale and enlighten our perspective on the potential for regularized fluid models to capture multiscale phenomena.

  11. s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography

    PubMed Central

    Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai

    2016-01-01

    EEG source imaging enables us to reconstruct the current density in the brain from electrical measurements with excellent temporal resolution (~ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity of the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computations compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529

  12. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.

  13. Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model

    PubMed Central

    Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz

    2014-01-01

    Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels via the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from estimates of intermediate variables and do not yield an optimum estimate of the oxygen tension values, due to their nonlinear dependence on the ratio of the intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits knowledge available from studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach, where knowledge about the intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of the earlier methods, and its superior performance is demonstrated both quantitatively and qualitatively. PMID:23732915

  14. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  15. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution of the resulting matrix system is obtained using iterative regularization methods that counteract the effect of noise in the measurements. These methods do not require calculation of the singular value decomposition, which can be expensive when the matrix system is large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence": the optimal regularized solution is obtained after a few iterations, but if the iteration is not stopped, the method converges to a solution that is generally corrupted by measurement errors. For these methods the number of iterations plays the role of the regularization parameter. We focus on the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR, and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
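
    Semi-convergence makes the iteration count itself the regularization parameter. A minimal CGLS sketch follows (least squares QR behaves similarly), storing the iterates so a stopping rule such as the discrepancy principle can pick the best one; A and b stand in for the discretized boundary-element system and the measured data.

    ```python
    import numpy as np

    def cgls(A, b, n_iter):
        """CGLS for min ||Ax - b||: early stopping regularizes
        (semi-convergence); n_iter plays the role of the
        regularization parameter."""
        x = np.zeros(A.shape[1])
        r = b.copy()          # residual b - A x
        s = A.T @ r
        p = s.copy()
        norm_s_old = s @ s
        history = []
        for _ in range(n_iter):
            q = A @ p
            alpha = norm_s_old / (q @ q)
            x += alpha * p
            r -= alpha * q
            s = A.T @ r
            norm_s = s @ s
            p = s + (norm_s / norm_s_old) * p
            norm_s_old = norm_s
            history.append(x.copy())
        # pick an iterate from history via a stopping rule (e.g. discrepancy)
        return x, history
    ```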

  16. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical measurements at its boundary. This is an ill-posed inverse problem, and its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms incorporating higher order differential operators have been developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes along selected vertical lines. For each reconstructed image, as well as the ground truth image, the conductivity changes along the selected left and right vertical lines are plotted (GT denotes ground truth, TV the total variation method, and TGV the total generalized variation method). Reconstructed conductivity distributions from the GREIT algorithm are also shown.

  17. Solving the hypersingular boundary integral equation in three-dimensional acoustics using a regularization relationship.

    PubMed

    Yan, Zai You; Hung, Kin Chew; Zheng, Hui

    2003-05-01

    Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of discretized operator matrix, evaluation of the double surface integrals is reduced to calculate the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.

  18. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
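
    To illustrate the Chambolle-Pock machinery this work builds on, here is a minimal sketch for the simplest relevant instance, TV (ROF) denoising; the registration problem adds the diffeomorphic transport terms and the robust Huber/TGV regularizers on top of this structure. Step sizes follow the usual bound tau*sigma*||grad||^2 <= 1, and the iteration count is an illustrative choice.

    ```python
    import numpy as np

    def grad(u):
        # forward differences with Neumann boundary
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div(px, py):
        # negative adjoint of grad
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
        dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
        dy[:, -1] = -py[:, -2]
        return dx + dy

    def cp_tv_denoise(f, lam, n_iter=200):
        """Chambolle-Pock sketch for ROF denoising:
        min_u 0.5*||u - f||^2 + lam*TV(u)."""
        tau = sigma = 1.0 / np.sqrt(8.0)   # ||grad||^2 <= 8 on a 2D grid
        u = f.copy(); u_bar = f.copy()
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(n_iter):
            gx, gy = grad(u_bar)
            px += sigma * gx; py += sigma * gy
            norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)
            px /= norm; py /= norm          # project onto the dual ball
            u_old = u
            u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)
            u_bar = 2 * u - u_old           # over-relaxation
        return u
    ```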

  19. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method by applying the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and the true image of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach yields higher cross-correlation values than the standard FBP method.

  20. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  1. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  2. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  3. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  4. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  5. Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking

    NASA Astrophysics Data System (ADS)

    Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.

    2017-03-01

    Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques, have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization separately. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method (1) builds a data-driven 4-dimensional dictionary of Lagrangian displacements using sparse learning, (2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and (3) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block-matching speckle tracker and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.

  6. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order in time gradient-like system (such as steepest descent methods) to numerically solve the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
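
    The core idea, replacing the first-order gradient flow by a damped second-order system whose trajectory is then discretized, can be sketched in a few lines for a linear least-squares functional; the damping eta, step dt, and fixed iteration count below are illustrative stand-ins for the paper's damped symplectic scheme and dynamical parameter selection.

    ```python
    import numpy as np

    def damped_dynamical_flow(A, b, eta=1.0, dt=0.1, n_steps=2000):
        """Sketch of a second-order damped gradient flow
        x'' + eta*x' = -grad J(x) for J(x) = 0.5*||Ax - b||^2,
        discretized with a simple semi-implicit scheme. Stopping
        the flow early acts as regularization."""
        x = np.zeros(A.shape[1])
        v = np.zeros_like(x)
        for _ in range(n_steps):
            g = A.T @ (A @ x - b)
            v = (v - dt * g) / (1.0 + dt * eta)  # implicit damping step
            x = x + dt * v
        return x
    ```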

  7. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  8. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduced artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and of partial Fourier + divergence-free regularized flow imaging + PI + CS, were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction without the need for phase unwrapping. It is robust to the choice of initial solution and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  9. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
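
    As a concrete, simplified stand-in for regularized deconvolution in the frequency domain, the following sketch applies a Tikhonov-type spectral filter to a measured signal y blurred by an impulse response h; the paper's contribution is then to derive an uncertainty contribution for the choice of such a regularization, which this sketch does not attempt. The parameter lam is an illustrative placeholder.

    ```python
    import numpy as np

    def regularized_deconvolution(y, h, lam=1e-2):
        """Frequency-domain deconvolution with Tikhonov-type
        regularization: X = conj(H)*Y / (|H|^2 + lam). lam trades
        noise amplification against bias."""
        n = len(y)
        H = np.fft.rfft(h, n)
        Y = np.fft.rfft(y)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
        return np.fft.irfft(X, n)
    ```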

  10. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as a strictly layered structure, since this assumption may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective, user-guided regularization was compared with that for automated regularization methods. The automated regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, reconstruction accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  11. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.

  12. Automatic Aircraft Collision Avoidance System and Method

    NASA Technical Reports Server (NTRS)

    Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)

    2014-01-01

    The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision system for an aircraft.
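
    The recursive split-and-fit idea can be sketched with a quadtree of least-squares plane fits; the tile splitting, tolerance test, and stored coefficients below are an illustration of the semi-regular geometric compression concept, not the patented algorithm. Each tile is fit in its own local coordinates for simplicity.

    ```python
    import numpy as np

    def fit_plane(z):
        """Least-squares flat-surface fit z ~ a*x + b*y + c over a grid tile;
        returns coefficients and the max absolute fit error."""
        ny, nx = z.shape
        X, Y = np.meshgrid(np.arange(nx), np.arange(ny))
        A = np.column_stack([X.ravel(), Y.ravel(), np.ones(z.size)])
        coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
        err = np.max(np.abs(A @ coef - z.ravel()))
        return coef, err

    def compress(z, tol, out):
        """Recursively approximate terrain tiles by flat surfaces; split a
        tile into four when the fit error exceeds the tolerance."""
        coef, err = fit_plane(z)
        if err <= tol or min(z.shape) <= 2:
            out.append(coef)        # store plane coefficients for this tile
            return
        ny, nx = z.shape
        for tile in (z[:ny//2, :nx//2], z[:ny//2, nx//2:],
                     z[ny//2:, :nx//2], z[ny//2:, nx//2:]):
            compress(tile, tol, out)
    ```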

  13. Renormalized stress-energy tensor for stationary black holes

    NASA Astrophysics Data System (ADS)

    Levi, Adam

    2017-01-01

    We continue the presentation of the pragmatic mode-sum regularization (PMR) method for computing the renormalized stress-energy tensor (RSET). We show in detail how to employ the t-splitting variant of the method, which was first presented for ⟨ϕ2⟩ren, to compute the RSET in a stationary, asymptotically flat background. This variant of the PMR method was recently used to compute the RSET for an evaporating spinning black hole. As an example of regularization, we demonstrate here the computation of the RSET for a minimally coupled, massless scalar field on a Schwarzschild background in all three vacuum states. We discuss future work and possible improvements of the regularization schemes in the PMR method.

  14. On the regularized fermionic projector of the vacuum

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  15. Accounting for the Effect of Earth's Rotation in Magnetotelluric Inference

    NASA Astrophysics Data System (ADS)

    Riegert, D. L.; Thomson, D. J.

    2017-12-01

    The study of geomagnetism has been documented as far back as 1722, when the watchmaker G. Graham constructed a more sensitive compass and showed that variations in geomagnetic direction follow an irregular daily pattern. Interest in geomagnetism increased at the end of the 19th century (Lamb, Schuster, Chapman, and Price). The magnetotelluric method was first introduced in the 1950s (Cagniard and Tikhonov) and, at its core, is simply a regression problem. The result of this method is a transfer function estimate that describes the earth's response to magnetic field variations. This estimate can then be used to infer the earth's subsurface structure, which is useful for applications such as natural resource exploration. The statistical problem of estimating a transfer function between geomagnetic and induced current measurements has evolved since the 1950s due to a variety of problems: non-stationarity, outliers, and violation of Gaussian assumptions. To address some of these issues, robust regression methods (Chave and Thomson, 2004) and the remote reference method (Gamble, 1979) have been proposed and used. The current method seems to provide reasonable estimates, but still requires a large amount of data. Using the multitaper method of spectral analysis (Thomson, 1982), taking long (greater than 4 months) blocks of geomagnetic data, and concentrating on frequencies below 1000 microhertz to avoid ultraviolet effects, one finds that: (1) the cross-spectra are dominated by many offset frequencies, including plus and minus 1 and 2 cycles per day; (2) the coherence at these offset frequencies is often stronger than at zero offset; (3) there are strong couplings from the "quasi two-day" cycle; (4) the frequencies are usually not symmetric; (5) the spectra are dominated by the normal modes of the Sun. This talk will discuss the method of incorporating these observations into the transfer function estimation model, some of the difficulties that arose, their solutions, and current results.
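
    At its core the magnetotelluric transfer function is a frequency-domain regression. A deliberately simplified single-input sketch using Welch and cross-spectral estimates is shown below; production MT processing uses robust multivariate regression and remote references, and the function and parameter names here (h_field, e_field, nperseg) are placeholders.

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    def transfer_function(h_field, e_field, fs):
        """Single-input transfer-function estimate T(f) = S_he(f) / S_hh(f)
        between a magnetic channel and an electric (telluric) channel;
        a minimal stand-in for the regression described above."""
        f, S_he = csd(h_field, e_field, fs=fs, nperseg=4096)
        f, S_hh = welch(h_field, fs=fs, nperseg=4096)
        return f, S_he / S_hh
    ```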

  16. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  17. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE PAGES

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; ...

    2016-07-13

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  18. Improved resistivity imaging of groundwater solute plumes using POD-based inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.; Khan, T.

    2012-12-01

    We propose a new approach for enforcing physics-based regularization in electrical resistivity imaging (ERI) problems. The approach utilizes a basis-constrained inversion in which an optimal set of basis vectors is extracted from training data by Proper Orthogonal Decomposition (POD). The key aspect of the approach is that Monte Carlo simulation of flow and transport is used to generate a training dataset, thereby intrinsically capturing the physics of the underlying flow and transport models in a non-parametric form. POD allows these training data to be projected onto a subspace of the original domain, resulting in the extraction of a basis for the inversion that captures characteristics of the groundwater flow and transport system, while simultaneously allowing for dimensionality reduction of the original problem in the projected space. We use two different synthetic transport scenarios in heterogeneous media to illustrate how the POD-based inversion compares with standard Tikhonov and coupled inversion. The first scenario had a single source zone leading to a unimodal solute plume (synthetic #1), whereas the second scenario had two source zones that produced a bimodal plume (synthetic #2). For both coupled inversion and the POD approach, the conceptual flow and transport model considered only a single source zone for both scenarios. Results were compared based on multiple metrics: concentration root-mean-square error (RMSE), peak concentration, and total solute mass. In addition, results for POD inversion based on three different data densities (120, 300, and 560 data points) and varying numbers of selected basis images (100, 300, and 500) were compared. For synthetic #1, we found that all three methods provided qualitatively reasonable reproduction of the true plume. Quantitatively, the POD inversion performed best overall for each metric considered. Moreover, since synthetic #1 was consistent with the conceptual transport model, a small number of basis vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or the number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
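
    The two computational ingredients, extracting a POD basis from Monte Carlo training images and inverting in the reduced subspace, can be sketched as follows. The linearized sensitivity J, data d, and small Tikhonov weight on the coefficients are simplifying assumptions; the actual ERI inversion is nonlinear and iterates this kind of step.

    ```python
    import numpy as np

    def pod_basis(snapshots, k):
        """Extract k POD basis vectors from training snapshots (columns);
        the flow-and-transport physics enters through the training set."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :k]

    def basis_constrained_inversion(J, d, Phi, lam=1e-3):
        """Solve a linearized resistivity step in the POD subspace:
        model m = Phi @ c, with a small Tikhonov term on c."""
        A = J @ Phi
        c = np.linalg.solve(A.T @ A + lam * np.eye(Phi.shape[1]), A.T @ d)
        return Phi @ c
    ```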

  19. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar

    2016-07-01

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
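
    A practical takeaway from the one-way coupling result is to verify adjoint gradients against finite differences before trusting an optimization run. A generic sketch of such a consistency check is shown below; cost and grad are user-supplied callables for the regularized functional and its adjoint-based gradient, and q is a flat parameter vector.

    ```python
    import numpy as np

    def gradient_check(cost, grad, q, n_dirs=5, eps=1e-6):
        """Central finite-difference check that a (possibly adjoint-based)
        gradient is consistent with the cost functional; inconsistent
        gradients, as with one-way coupled adjoints, lose the descent
        property."""
        rng = np.random.default_rng(0)
        g = grad(q)
        for _ in range(n_dirs):
            d = rng.standard_normal(q.shape)
            d /= np.linalg.norm(d)
            fd = (cost(q + eps * d) - cost(q - eps * d)) / (2 * eps)
            an = g @ d
            rel = abs(fd - an) / max(abs(fd), 1e-30)
            print(f"fd={fd:.6e}  adjoint={an:.6e}  rel_err={rel:.2e}")
    ```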

  20. The method of A-harmonic approximation and optimal interior partial regularity for nonlinear elliptic systems under the controllable growth condition

    NASA Astrophysics Data System (ADS)

    Chen, Shuhong; Tan, Zhong

    2007-11-01

    In this paper, we consider nonlinear elliptic systems under the controllable growth condition. We use a new method, introduced by Duzaar and Grotowski, for proving partial regularity of weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results under the natural growth condition to the case of the controllable growth condition, and directly establish the optimal Hölder exponent for the derivative of a weak solution.

  1. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates optimization-based methods that utilize additional sparse priors. However, most conventional methods exploit sparsity priors only in the spatial domain. When the CT projection data suffer from serious deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from a wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during the iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using a regularization model of the spatial domain alone. Quantitative evaluations of the results also indicate that the proposed algorithm, with its learning strategy, performs better than dual domain algorithms without a learned regularization model.

  2. [Application of regular expression in extracting key information from Chinese medicine literatures about re-evaluation of post-marketing surveillance].

    PubMed

    Wang, Zhifei; Xie, Yanming; Wang, Yongyan

    2011-10-01

    Computerized extraction of information from the Chinese medicine literature is more convenient than hand searching: it simplifies the search process and improves accuracy. Although many automated extraction methods are now in use, regular expressions are particularly well suited to extracting the information needed in this research. This article focuses on applying regular expressions to extract information from the Chinese medicine literature. Two practical examples are reported, in which regular expressions extract "case number (non-terminology)" and "efficacy rate (subgroups for related information identification)", illustrating how to extract information from the Chinese medicine literature by this method.
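
    A small illustration in the spirit of the two reported examples, with hypothetical patterns (the sample sentence and the regexes below are illustrative, not the paper's actual patterns) for pulling a case count and an efficacy rate out of a Chinese-language abstract:

    ```python
    import re

    # Hypothetical example sentence: "120 cases were enrolled in total,
    # with an overall efficacy rate of 93.3%."
    text = "共纳入病例120例, 总有效率为93.3%。"

    case_number = re.search(r"(\d+)\s*例", text)          # digits before "例" (cases)
    efficacy = re.search(r"有效率[为达]?\s*([\d.]+)%", text)  # rate after "有效率"

    print(case_number.group(1), efficacy.group(1))        # -> 120 93.3
    ```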

  3. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  4. Application of L1/2 regularization logistic method in heart disease diagnosis.

    PubMed

    Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying

    2014-01-01

    Heart disease has become the number one killer threatening human health, and its diagnosis depends on many features, such as age, blood pressure, heart rate and dozens of other physiological indicators. Although there are many risk factors, doctors usually diagnose the disease based on intuition and experience, so correct determination requires considerable knowledge and experience. Finding the hidden medical information in existing clinical data is a promising and powerful approach to the study of heart disease diagnosis. In this paper, a sparse logistic regression method with L(1/2) regularization is introduced to detect the key risk factors on real heart disease data. Experimental results show that the sparse logistic L(1/2) regularization method selects fewer but more informative key features than the Lasso, SCAD, MCP and Elastic net regularization approaches. At the same time, the proposed method can reduce computational complexity, save the cost and time of medical tests and checkups, and reduce the number of attributes that need to be collected from patients.
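
    The L(1/2) penalty is non-convex; one common way to approximate it, sketched below, is iteratively reweighted l1 inside a proximal-gradient loop. The paper uses a dedicated half-thresholding scheme instead, and X, y, the step size, and the iteration counts here are illustrative placeholders.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def l_half_logistic(X, y, lam, outer=10, inner=500, lr=0.01, eps=1e-4):
        """Sparse logistic regression with an L(1/2)-type penalty,
        approximated by iteratively reweighted l1 with weights
        1/(sqrt(|w|) + eps), solved by proximal gradient steps.
        X is n x p; y holds 0/1 labels."""
        n, p = X.shape
        w = np.zeros(p)
        for _ in range(outer):
            wt = 1.0 / (np.sqrt(np.abs(w)) + eps)   # reweight from last iterate
            for _ in range(inner):
                g = X.T @ (sigmoid(X @ w) - y) / n  # logistic loss gradient
                w = w - lr * g                      # gradient step
                # soft-threshold with per-coordinate weighted threshold
                w = np.sign(w) * np.maximum(np.abs(w) - lr * lam * wt, 0.0)
        return w
    ```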

  5. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
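
    For reference, the conventional MLEM update that the proposed re-parameterization plugs into looks as follows; in the patch-based method the unknowns become coefficients of MR-derived basis vectors, so the system matrix is effectively composed with the basis. A dense-matrix sketch with illustrative names:

    ```python
    import numpy as np

    def mlem(P, y, n_iter=50):
        """Classic MLEM for PET: P is the system matrix (detector bins x
        voxels), y the measured counts. Each iteration is a multiplicative
        update that preserves non-negativity."""
        x = np.ones(P.shape[1])
        sens = P.sum(axis=0)                  # sensitivity image (column sums)
        for _ in range(n_iter):
            proj = P @ x                      # forward projection
            ratio = y / np.maximum(proj, 1e-12)
            x *= (P.T @ ratio) / np.maximum(sens, 1e-12)
        return x
    ```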

  6. Efficient energy stable schemes for isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven

    2018-07-01

    We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.

  7. An analytical method for the inverse Cauchy problem of Lame equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lame equation in elasticity theory. A rectangular domain is frequently used in engineering structures, so we consider the analytical solution only in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of the data. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
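
    For a discretized first-kind Fredholm equation Kx = g, Lavrentiev regularization simply shifts the operator rather than forming normal equations; a one-line numerical sketch follows (alpha plays the role of the regularization parameters mentioned above, one per subproblem; the paper works with the closed-form series instead).

    ```python
    import numpy as np

    def lavrentiev(K, g, alpha):
        """Lavrentiev regularization for a discretized first-kind equation
        K x = g: solve (alpha*I + K) x = g. Unlike Tikhonov, it avoids
        forming K^T K (K is assumed accretive)."""
        n = K.shape[0]
        return np.linalg.solve(alpha * np.eye(n) + K, g)
    ```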

  8. Advanced Imaging Methods for Long-Baseline Optical Interferometry

    NASA Astrophysics Data System (ADS)

    Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.

    2008-11-01

    We address the data processing methods needed for imaging with a long-baseline optical interferometer. We first describe parametric reconstruction approaches and then adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.

  9. Representation of the exact relativistic electronic Hamiltonian within the regular approximation

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2003-12-01

    The exact relativistic Hamiltonian for electronic states is expanded in terms of energy-independent linear operators within the regular approximation. An effective relativistic Hamiltonian is obtained, which yields in lowest order directly the infinite-order regular approximation (IORA) rather than the zeroth-order regular approximation method. Further perturbational expansion of the exact relativistic electronic energy utilizing the effective Hamiltonian leads to new methods based on ordinary (IORAn) or double [IORAn(2)] perturbation theory (n: order of expansion), which provide improved energies in atomic calculations. Energies calculated with IORA4 and IORA3(2) are accurate up to order c^(-20). Furthermore, IORA is improved by using the IORA wave function to calculate the Rayleigh quotient, which, if minimized, leads to the exact relativistic energy. The outstanding performance of this new IORA method, coined scaled IORA, is documented in atomic and molecular calculations.

  10. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization mean. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.

  11. RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING

    PubMed Central

    Liu, Meizhu; Vemuri, Baba C.

    2011-01-01

    Boosting is a versatile machine learning technique that has numerous applications, including but not limited to image processing, computer vision, and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities (available in closed form) – used to represent the distribution over the training data and the classification error, respectively – to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost than other regularized boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LPBoost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home-grown epilepsy database and the well-known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
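
    The closed-form distance in question is the geodesic distance between square-root densities on the unit sphere (the Fisher-Rao geometry); for discrete distributions p and q it reduces to the arccosine of the Bhattacharyya coefficient. A minimal sketch:

    ```python
    import numpy as np

    def riemannian_distance(p, q):
        """Geodesic distance between sqrt-densities: d(p, q) =
        arccos(<sqrt(p), sqrt(q)>). p and q are non-negative vectors
        summing to 1 (e.g. weight distributions over training samples)."""
        bc = np.sum(np.sqrt(p) * np.sqrt(q))   # Bhattacharyya coefficient
        return np.arccos(np.clip(bc, -1.0, 1.0))
    ```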

  12. Composite SAR imaging using sequential joint sparsity

    NASA Astrophysics Data System (ADS)

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.

    2017-06-01

    This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation in terms of real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as efficient as, or more efficient than, traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.

  13. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging data, which can be formulated as a regularized SPIRiT problem. The projection onto convex sets (POCS) method has been used to solve the regularized SPIRiT problem, but the quality of the reconstructed image still needs to be improved. Methods such as nonlinear conjugate gradients (NLCG) can achieve higher spatial resolution, but they demand complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, the latter solved by our proposed split-Bregman-based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels and, in particular, 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
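
    The Barzilai-Borwein step-size rule adopted above can be sketched in a few lines; grad is a hypothetical gradient function, and no safeguard against a non-positive curvature estimate is included, so this is a minimal sketch rather than a production implementation:

        import numpy as np

        def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1e-3):
            # gradient descent with the Barzilai-Borwein (BB1) step size
            # alpha_k = (s's)/(s'y), s = x_k - x_{k-1}, y = g_k - g_{k-1},
            # a cheap scalar approximation of the inverse Hessian
            x_prev, g_prev = x0, grad(x0)
            x = x0 - alpha0 * g_prev
            for _ in range(n_iter):
                g = grad(x)
                s, y = x - x_prev, g - g_prev
                alpha = (s @ s) / (s @ y)
                x_prev, g_prev = x, g
                x = x - alpha * g
            return x

        # example: minimize 0.5 * ||Ax - b||^2
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, 1.0])
        x_star = bb_gradient_descent(lambda x: A.T @ (A @ x - b), np.zeros(2))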

  14. Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts

    PubMed Central

    Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.

    2013-01-01

    To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (such as ℓ1 or total-variation regularizers) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., ℓ1-norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
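
    The payoff of the circulant-plus-mask splitting is that the circulant subproblem can be solved non-iteratively with FFTs. A minimal sketch of that inner solve, assuming a periodic blur kernel and a quadratic penalty rho (not the authors' complete AL algorithm):

        import numpy as np

        def solve_circulant_subproblem(kernel, rhs, rho):
            # solves (C'C + rho I) x = rhs non-iteratively, where C is periodic
            # convolution with `kernel`; the FFT diagonalizes C, so C'C has
            # eigenvalues |K|^2 with K the 2D DFT of the kernel
            K = np.fft.fft2(kernel, s=rhs.shape)
            return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (np.abs(K) ** 2 + rho)))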

  15. Students with Chronic Conditions: Experiences and Challenges of Regular Education Teachers

    ERIC Educational Resources Information Center

    Selekman, Janice

    2017-01-01

    School nurses have observed the increasing prevalence of children with chronic conditions in the school setting; however, little is known about teacher experiences with these children in their regular classrooms. The purpose of this mixed-method study was to describe the experiences and challenges of regular education teachers when they have…

  16. The Temporal Dynamics of Regularity Extraction in Non-Human Primates

    ERIC Educational Resources Information Center

    Minier, Laure; Fagot, Joël; Rey, Arnaud

    2016-01-01

    Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm (Saffran, Aslin, & Newport, 1996) and the serial response time task (Nissen & Bullemer,…

  17. Artificial Neural Network with Regular Graph for Maximum Air Temperature Forecasting: The Effect of Decrease in Nodes Degree on Learning

    NASA Astrophysics Data System (ADS)

    Ghaderi, A. H.; Darooneh, A. H.

    The behavior of nonlinear systems can be analyzed by artificial neural networks, and air temperature change is one example of such a system. In this work, a new neural network method is proposed for forecasting maximum air temperature in two cities. In this method, the regular graph concept is used to construct partially connected neural networks that have regular structures. The learning results of a fully connected ANN and of networks built with the proposed method are compared; in some cases, the proposed method gives better results than the conventional ANN. After identifying the best network, the effect of the number of input patterns on the prediction is studied, and the results show that increasing the number of input patterns directly improves prediction accuracy.

  18. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. However, the algorithms currently used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build the mathematical model of phase retrieval and simplify it into a constrained optimization problem over a multi-dimensional function. A new error function was designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and yields better images, especially under low-SNR conditions.

  19. Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, S. M.

    2018-03-01

    Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for the generalized symplecticity of the constructed transformation have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters, taking into account the perturbation of the Sun, for different L-matrices.

  20. Secondary LD Mainstreaming Methods: Instructional Module. (An Instructional Module for Preservice or Inservice Training of Regular Secondary Educators).

    ERIC Educational Resources Information Center

    Reetz, Linda J.; Hoover, John H.

    Intended for use in preservice or inservice training of regular secondary educators, the module examines principles of communication, assessment, teaching methods, and classroom management through text, an annotated bibliography, and overhead masters. The first section covers communicating with handicapped students, their parents, and other…

  1. Spatio-Temporal Regularization for Longitudinal Registration to Subject-Specific 3d Template

    PubMed Central

    Guizard, Nicolas; Fonov, Vladimir S.; García-Lorenzo, Daniel; Nakamura, Kunio; Aubert-Broche, Bérengère; Collins, D. Louis

    2015-01-01

    Neurodegenerative diseases such as Alzheimer's disease present subtle anatomical brain changes before the appearance of clinical symptoms. Manual structure segmentation is slow and tedious, and although automatic methods exist, they are often performed in a cross-sectional manner in which each time-point is analyzed independently. Such analysis methods may introduce bias, error, and longitudinal noise; noise due to MR scanners and other physiological effects may introduce further variability in the measurements. We propose to use 4D non-linear registration with spatio-temporal regularization to correct for potential longitudinal inconsistencies in the context of structure segmentation. The major contribution of this article is the use of individual template creation with spatio-temporal regularization of the deformation fields for each subject. We validate our method with different sets of real MRI data, compare it to available longitudinal methods such as FreeSurfer, SPM12, QUARC, TBM, and KNBSI, and demonstrate that spatially local temporal regularization yields more consistent rates of change of global structures, resulting in better statistical power to detect significant changes over time and between populations. PMID:26301716

  2. Hyperspectral imagery super-resolution by compressive sensing inspired dictionary learning and spatial-spectral regularization.

    PubMed

    Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui

    2015-01-19

    Due to instrumental and imaging-optics limitations, it is difficult to acquire high spatial resolution hyperspectral imagery (HSI). Super-resolution (SR) aims at inferring high-quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high resolution dictionary, we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and sensing matrix. Thus, a sparsity- and incoherence-restricted dictionary learning method is proposed to achieve a more efficient sparse representation. Second, a variational regularization model combining a spatial sparsity regularization term and a new local spectral similarity preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high spatial resolution HSI reconstructed by the proposed method outperforms the results reconstructed by other well-known methods in terms of both objective measurements and visual evaluation.

  3. Iterative image reconstruction that includes a total variation regularization for radial MRI.

    PubMed

    Kojima, Shinya; Shinohara, Hiroyuki; Hashimoto, Takeyuki; Hirata, Masami; Ueno, Eiko

    2015-07-01

    This paper presents an iterative image reconstruction method for radial encodings in MRI based on total variation (TV) regularization. The algebraic reconstruction technique combined with total variation regularization (ART_TV) is implemented with a regularization parameter specifying the weight of the TV term in the optimization process. We used numerical simulations of a Shepp-Logan phantom, as well as experimental imaging of a phantom that included a rectangular-wave chart, to evaluate the performance of ART_TV and to compare it with that of the Fourier transform (FT) method. The trade-off between spatial resolution and signal-to-noise ratio (SNR) was investigated for different values of the regularization parameter through experiments on a phantom and a commercially available MRI system. ART_TV was inferior to the FT with respect to the modulation transfer function (MTF), especially at high frequencies; however, it outperformed the FT with regard to the SNR. Consistent with the SNR measurements, visual inspection suggested that the image quality of ART_TV was better than that of the FT for reconstruction of a noisy image of a kiwi fruit. In conclusion, ART_TV provides radial MRI with improved image quality for low-SNR data; however, the regularization parameter in ART_TV is a critical factor for obtaining improvement over the FT.
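
    A rough sketch of the alternation behind such a method, one Kaczmarz (ART) sweep followed by a gradient step on a smoothed TV term, is given below; the system matrix A, data b, and step parameters are hypothetical, boundary handling is periodic for brevity, and the paper's exact update may differ:

        import numpy as np

        def tv_gradient(img, eps=1e-8):
            # gradient of the smoothed isotropic TV, sum sqrt(dx^2 + dy^2 + eps)
            dx = img - np.roll(img, -1, axis=1)
            dy = img - np.roll(img, -1, axis=0)
            mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
            px, py = dx / mag, dy / mag
            return px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)

        def art_tv_sweep(x, A, b, shape, lam, relax=1.0):
            # one ART (Kaczmarz) pass over all measurement rows,
            # then a descent step on the TV regularizer
            for i in range(A.shape[0]):
                a = A[i]
                x = x + relax * (b[i] - a @ x) / (a @ a) * a
            img = x.reshape(shape)
            return (img - lam * tv_gradient(img)).ravel()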

  4. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.

  5. Human action recognition with group lasso regularized-support vector machine

    NASA Astrophysics Data System (ADS)

    Luo, Huiwu; Lu, Huanzhang; Wu, Yabei; Zhao, Fei

    2016-05-01

    The bag-of-visual-words (BOVW) and Fisher kernel are two popular models in human action recognition, and the support vector machine (SVM) is the most commonly used classifier for both. We identify two kinds of group structure in the feature representations constructed by BOVW and the Fisher kernel, respectively; such structural information can serve as a prior for the classifier and improve its performance, as has been verified in several areas. However, the standard SVM employs L2-norm regularization in its learning procedure, which penalizes each variable individually and cannot express the structural information of the feature representation. We replace the L2-norm regularization with group lasso regularization in the standard SVM, yielding a group lasso regularized support vector machine (GLRSVM). We then embed the group structural information of the feature representation into GLRSVM. Finally, we introduce an algorithm to solve the optimization problem of GLRSVM by the alternating direction method of multipliers. Experiments on the KTH, YouTube, and Hollywood2 datasets show that our method achieves promising results and improves on the state-of-the-art methods on the KTH and YouTube datasets.
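
    The objective change described above, hinge loss with a group lasso penalty replacing the squared L2 norm, can be sketched as follows; the group index sets and trade-off constant C are illustrative assumptions:

        import numpy as np

        def group_lasso_penalty(w, groups):
            # sum of unsquared l2 norms over disjoint index groups; unlike the
            # squared l2 penalty, it can drive entire groups of weights to zero
            return sum(np.linalg.norm(w[g]) for g in groups)

        def glrsvm_objective(w, bias, X, y, groups, C=1.0):
            # hinge loss plus the group lasso regularizer, for labels y in {-1, +1}
            hinge = np.maximum(0.0, 1.0 - y * (X @ w + bias)).sum()
            return C * hinge + group_lasso_penalty(w, groups)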

  6. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ²), and GCV (that does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
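
    For the simpler linear Tikhonov case, the GCV score has a closed form via the SVD, sketched below; the paper's contribution is the harder nonlinear case, which additionally requires the Jacobian of the reconstruction operator. A and y are hypothetical system matrix and data:

        import numpy as np

        def gcv_score(A, y, lam):
            # GCV(lam) = n ||(I - H)y||^2 / tr(I - H)^2 for the hat matrix
            # H = A (A'A + lam I)^{-1} A' of Tikhonov-regularized least squares
            U, s, _ = np.linalg.svd(A, full_matrices=False)
            f = s ** 2 / (s ** 2 + lam)          # shrinkage factors (eigenvalues of H)
            resid = y - U @ (f * (U.T @ y))      # (I - H) y
            df = f.sum()                         # tr(H), effective degrees of freedom
            return len(y) * (resid @ resid) / (len(y) - df) ** 2

        # choose lam by minimizing the score over a grid, e.g.:
        # lam_best = min(np.logspace(-6, 2, 50), key=lambda l: gcv_score(A, y, l))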

  7. MO-DE-207A-02: A Feature-Preserving Image Reconstruction Method for Improved Pancreatic Lesion Classification in Diagnostic CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Tsui, B; Noo, F

    Purpose: To develop a feature-preserving model-based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed an MBIR method with curvature-based regularization. The novel regularization encourages formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue an iterative-reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out with insertion of the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer. The senior author receives financial support from Siemens GmbH Healthcare.

  8. Work Integration of People with Disabilities in the Regular Labour Market: What Can We Do to Improve These Processes?

    ERIC Educational Resources Information Center

    Vila, Montserrat; Pallisera, Maria; Fullana, Judit

    2007-01-01

    Background: It is important to ensure that regular processes of labour market integration are available for all citizens. Method: Thematic content analysis techniques, using semi-structured group interviews, were used to identify the principal elements contributing to the processes of integrating people with disabilities into the regular labour…

  9. Increasing Accuracy of Tissue Shear Modulus Reconstruction Using Ultrasonic Strain Tensor Measurement

    NASA Astrophysics Data System (ADS)

    Sumi, C.

    Previously, we developed three displacement vector measurement methods: the multidimensional cross-spectrum phase gradient method (MCSPGM), the multidimensional autocorrelation method (MAM), and the multidimensional Doppler method (MDM). To increase the accuracy and stability of lateral and elevational displacement measurements, we also developed spatially variant, displacement-component-dependent regularization. In particular, the regularization of only the lateral/elevational displacements is advantageous for the laterally unmodulated case. The demonstrated measurements of the displacement vector distributions in experiments using an inhomogeneous shear modulus agar phantom confirm that displacement-component-dependent regularization enables more stable shear modulus reconstruction. In this report, we also review our lateral modulation methods, which use parabolic functions, Hanning windows, and Gaussian functions in the apodization function, as well as the optimized apodization function that realizes the designed point spread function (PSF). The modulations significantly increase the accuracy of the strain tensor measurement and shear modulus reconstruction (demonstrated using an agar phantom).

  10. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  11. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366

  12. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  13. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  14. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  15. Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri

    2017-03-01

    Triangular surfaces are now widely used for modeling three-dimensional objects. Since these models have very high resolution and the mesh geometry is often very dense, it is necessary to remesh such objects to reduce their complexity while improving mesh quality (connectivity regularity). In this paper, we review the main state-of-the-art methods for semi-regular remeshing, noting that semi-regular remeshing is mainly relevant for wavelet-based compression. We then present our remeshing method, based on a trust-region spherical geometry image, which provides a good 3D mesh compression scheme and is used to deform 3D meshes with a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and yields efficient compression with a minimal set of features, giving a good 3D deformation scheme.

  16. Fast inversion of gravity data using the symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Zhaohai; Li, Fengting; Xu, Xuechun; Huang, Danian; Zhang, Dailei

    2017-02-01

    The subsurface three-dimensional (3D) density distribution is obtained by solving an under-determined linear system established from gravity data. Here, we describe a new fast gravity inversion method to recover a 3D density model from gravity data. The subsurface is divided into a large number of rectangular blocks, each with an unknown constant density. The inversion introduces a stabilizing model norm with a depth weighting function to produce smooth models; the depth weighting function counteracts the skin effect of the gravity potential field. Because the number of density model parameters is NZ (the number of layers in the vertical subsurface domain) times the number of observed gravity data, the problem is strongly underdetermined, and solving the full set of gravity inversion equations is very time-consuming. In this paper, a new symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient (CG) method is shown to be an appropriate algorithm for minimizing this Tikhonov cost function (the gravity inversion equation); it significantly reduces the number of iterations and the computational time. The new, faster method is applied to Gaussian-noise-contaminated synthetic data to demonstrate its suitability for 3D gravity inversion. To demonstrate the performance of the new algorithm on actual gravity data, we provide a case study using ground-based measurements of residual Bouguer gravity anomalies over the Humble salt dome near Houston, Gulf Coast Basin. A 3D distribution of salt rock concentration is used to evaluate the inversion results recovered by the new SSOR iterative method; in the test model, the density values in the constructed model coincide with the known location and depth of the salt dome.
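
    As a sketch of the Tikhonov system being solved, the following minimizes a depth-weighted cost with plain (unpreconditioned) conjugate gradients; the depth-weighting form, G, d, z, and all parameters are illustrative assumptions, and the paper's SSOR preconditioner is omitted:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def invert_gravity(G, d, z, lam, beta=2.0, z0=1.0):
            # solves the Tikhonov normal equations (G'G + lam W'W) m = G'd by CG,
            # with a depth weighting W = diag((z + z0)^(-beta/2)) of the kind used
            # to counteract the decay of the gravity kernel with depth
            w2 = (z + z0) ** (-beta)                     # squared depth weights
            op = LinearOperator(
                (G.shape[1], G.shape[1]),
                matvec=lambda m: G.T @ (G @ m) + lam * w2 * m)
            m, _ = cg(op, G.T @ d)
            return m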

  17. Zeta Function Regularization in Casimir Effect Calculations and J. S. DOWKER's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-06-01

    A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.

  18. Zeta Function Regularization in Casimir Effect Calculations and J. S. Dowker's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-07-01

    A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.

  19. Discovering mutated driver genes through a robust and sparse co-regularized matrix factorization framework with prior information from mRNA expression patterns and interaction network.

    PubMed

    Xi, Jianing; Wang, Minghui; Li, Ao

    2018-06-05

    Discovery of mutated driver genes is one of the primary objectives in studying tumorigenesis. To discover relatively infrequently mutated driver genes from somatic mutation data, many existing methods incorporate an interaction network as prior information. However, prior information from mRNA expression patterns, which is also highly informative of cancer progression, is not exploited by these network-based methods. To incorporate prior information from both the interaction network and mRNA expression, we propose a robust and sparse co-regularized nonnegative matrix factorization to discover driver genes from mutation data. Our framework also applies Frobenius-norm regularization to overcome the overfitting issue, and a sparsity-inducing penalty is employed to obtain sparse scores in the gene representations, of which the top-scored genes are selected as driver candidates. Evaluation experiments with known benchmark genes indicate that the performance of our method benefits from both types of prior information. Our method also outperforms the existing network-based methods and detects some driver genes that are not predicted by the competing methods. In summary, the proposed method can improve driver gene discovery by effectively incorporating prior information from the interaction network and mRNA expression patterns into a robust and sparse co-regularized matrix factorization framework.

  20. Vacuum polarization in the field of a multidimensional global monopole

    NASA Astrophysics Data System (ADS)

    Grats, Yu. V.; Spirin, P. A.

    2016-11-01

    An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values ⟨φ²⟩_ren and ⟨T₀⁰⟩_ren have been obtained by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.

  1. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  2. Shape regularized active contour based on dynamic programming for anatomical structure segmentation

    NASA Astrophysics Data System (ADS)

    Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra

    2005-04-01

    We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enables building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space, in a gradient descent fashion, toward the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished through modifying the search space of the DP; the modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure consistency between the resulting shape and the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation. Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints, and that it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.

  3. Imaging Critical Zone Using High Frequency Rayleigh Wave Group Velocity Measurements Extracted from Ambient Seismic Fields Gathered With 2400 Seismic Nodes in Southeastern Wyoming.

    NASA Astrophysics Data System (ADS)

    Keifer, I. S.; Dueker, K. G.

    2016-12-01

    In an effort to characterize critical zone development in varying regions, seismologists conduct seismic surveys to help constrain critical zone properties, e.g., porosity and regolith thickness. A limitation of traditional critical zone seismology is that data are normally collected along lines to generate two-dimensional transects of subsurface seismic velocity, even though the critical zone structure is 3D. Hence, we deployed six 2D seismic arrays in southeastern Wyoming to gather ambient seismic fields from which 3D shear velocity models could be produced. The arrays were made up of nominally 400 seismic stations arranged in a 200-meter square grid layout. Each array produced a half-terabyte data volume, so a premium was placed on computational efficiency throughout this study to handle the roughly 65 billion samples recorded by each array. The ambient fields were cross-correlated on the Yellowstone supercomputer using the pSIN code (Chen et al., 2016), which decreased correlation run times by a factor of 300 with respect to workstation computers. Group delay times extracted from cross-correlations using 8 Hz frequency bands from 10 Hz to 100 Hz show frequency dispersion at sites with shallow regolith underlain by granite bedrock. Dimensionally, the group velocity map inversion is overdetermined, even after extensive culling of spurious group delay times. Model resolution matrices for our six arrays show values > 0.7 for most of the model domain, approaching unity at its center; we are therefore confident that we have an adequate number of rays covering the array space and should experience minimal smearing of the resulting model when the inverse solution is applied to the data. After inverting for the group velocity maps, a second inversion of the group velocity maps is performed to obtain the 3D shear velocity model. This inversion is underdetermined, and a second-order Tikhonov regularization is used to obtain stable inverse images. Results will be presented.
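
    For the final step above, a minimal 1D sketch of second-order Tikhonov regularization is given below; the kernel matrix G, data d, and smoothing weight lam are hypothetical placeholders:

        import numpy as np

        def second_order_tikhonov(G, d, lam):
            # solves min ||G m - d||^2 + lam^2 ||L m||^2 with L the second-difference
            # (discrete curvature) operator, via one augmented least-squares system
            n = G.shape[1]
            L = (np.diag(np.full(n, -2.0))
                 + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1))[1:-1]   # interior second differences
            A = np.vstack([G, lam * L])
            rhs = np.concatenate([d, np.zeros(L.shape[0])])
            m, *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return m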

  4. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

    Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m); these deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from the MNE and the locations of the ECDs were on average 12-13 mm for both deflections and both nerves stimulated. In accordance with the somatotopic order of SI, both the MNE and the ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-μV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can thus be used to verify parametric source modelling results. Having relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for localizing poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
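
    For reference, the Tikhonov-regularized L2-MNE has a compact closed form; the sketch below assumes a generic lead-field matrix L (sensors by sources) and regularization weight lam, and does not reproduce the paper's spherical-surface construction or noise scaling:

        import numpy as np

        def minimum_norm_estimate(L, y, lam):
            # among source distributions consistent with the sensor data y, the
            # regularized L2-MNE q = L' (L L' + lam I)^{-1} y favors the one of
            # smallest l2 norm, with lam trading data fit against source power
            m = L.shape[0]
            return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), y)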

  5. Raman lidar observations of a Saharan dust outbreak event: Characterization of the dust optical properties and determination of particle size and microphysical parameters

    NASA Astrophysics Data System (ADS)

    Di Girolamo, Paolo; Summa, Donato; Bhawar, Rohini; Di Iorio, Tatiana; Cacciani, Marco; Veselovskii, Igor; Dubovik, Oleg; Kolgotin, Alexey

    2012-04-01

    The Raman lidar system BASIL was operational in Achern (Black Forest) between 25 May and 30 August 2007 in the framework of the Convective and Orographically-induced Precipitation Study (COPS). The system performed continuous measurements over a period of approx. 36 h from 06:22 UTC on 1 August to 18:28 UTC on 2 August 2007, capturing the signature of a severe Saharan dust outbreak episode. The data clearly reveal the presence of two almost separate aerosol layers: a lower layer located between 1.5 and 3.5 km above ground level (a.g.l.) and an upper layer extending between 3.0 and 6.0 km a.g.l. The time evolution of the dust cloud is illustrated and discussed in the paper in terms of several optical parameters (particle backscatter ratio at 532 and 1064 nm, the colour ratio and the backscatter Ångström parameter). An inversion algorithm was used to retrieve particle size and microphysical parameters, i.e., mean and effective radius, number, surface area, volume concentration, and complex refractive index, as well as the parameters of a bimodal particle size distribution (PSD), from the multi-wavelength lidar data of particle backscattering, extinction and depolarization. The retrieval scheme employs Tikhonov's inversion with regularization and makes use of kernel functions for randomly oriented spheroids. Size and microphysical parameters of dust particles are estimated as a function of altitude at different times during the dust outbreak event. Retrieval results reveal the presence of a fine mode with radii of 0.1-0.2 μm and a coarse mode with radii of 3-5 μm both in the lower and upper dust layers, and the dominance in the upper dust layer of a coarse mode with radii of 4-5 μm. Effective radius varies with altitude in the range 0.1-1.5 μm, while volume concentration is found to not exceed 92 μm³ cm⁻³. The real and imaginary part of the complex refractive index vary in the range 1.4-1.6 and 0.004-0.008, respectively.

  6. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  7. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Furthermore, the subfilter-scale model uses Lagrangian averaging, a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000, and the obtained results are compared to results from the literature.

  8. SU-E-J-67: Evaluation of Breathing Patterns for Respiratory-Gated Radiation Therapy Using Respiration Regularity Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheong, K; Lee, M; Kang, S

    2014-06-01

    Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion-compensated treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power-of-cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation can be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained by principal component analysis (PCA) of the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1 + 1/δ)/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity can be objectively determined using the respiration regularity index ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases. This work was supported by a Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean Ministry of Science, ICT and Future Planning (No. 2013043498).
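
    Given the definitions above, the index is simple to compute; this sketch takes the Euclidean norm of the four (assumed standardized) fluctuation parameters directly, omitting the PCA mapping the authors apply:

        import numpy as np

        def regularity_index(period_sd, amp_sd, drift_slope, drift_resid_sd):
            # delta: Euclidean norm of the four fluctuation parameters
            # (the paper first maps them through PCA; omitted here);
            # rho = ln(1 + 1/delta)/2, higher rho meaning more regular breathing
            delta = np.linalg.norm([period_sd, amp_sd, drift_slope, drift_resid_sd])
            return np.log(1.0 + 1.0 / delta) / 2.0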

  9. Adiabatic regularization for gauge fields and the conformal anomaly

    NASA Astrophysics Data System (ADS)

    Chu, Chong-Sun; Koyama, Yoji

    2017-03-01

    Adiabatic regularization for quantum field theory in conformally flat spacetime is known for scalar and Dirac fermion fields. In this paper, we complete the construction by establishing the adiabatic regularization scheme for the gauge field. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined in a similar way using Wentzel-Kramers-Brillouin-type (WKB-type) solutions, as for the scalar fields. As an application of the adiabatic method, we compute the trace of the energy momentum tensor and reproduce the known result for the conformal anomaly obtained by the other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study various renormalized physical quantities of theories coupled to (non-Abelian) gauge fields in conformally flat spacetime, such as conformal supersymmetric Yang-Mills, inflation, and cosmology.

  10. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing, yet, to our knowledge, they have never been directly incorporated into the objective function in emission computed tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem; the DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF), and the mean square error (MSE) was smaller. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the DCT-induced tight framelet ℓ1 regularizer shows promise for SPECT image reconstruction using the PAPA method.
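
    The core operation inside such ℓ1-regularized reconstructions is the proximal, i.e., soft-thresholding, step on the transform coefficients. The sketch below shows a single denoising step with a plain orthonormal DCT; the paper's non-decimated framelet and the full PAPA iteration are not reproduced:

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_l1_prox(img, lam):
            # proximal operator of lam * ||DCT(img)||_1 for an orthonormal DCT:
            # transform, soft-threshold the coefficients, transform back
            c = dctn(img, norm='ortho')
            c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
            return idctn(c, norm='ortho')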

  11. Manifold Regularized Multitask Feature Learning for Multimodality Disease Classification

    PubMed Central

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2015-01-01

    Multimodality-based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods are typically used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality-based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. We also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the AD Neuroimaging Initiative database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. PMID:25277605

  12. Extended Hansen solubility approach: naphthalene in individual solvents.

    PubMed

    Martin, A; Wu, P L; Adjei, A; Beerbower, A; Prausnitz, J M

    1981-11-01

    A multiple regression method using Hansen partial solubility parameters, delta D, delta P, and delta H, was used to reproduce the solubilities of naphthalene in pure polar and nonpolar solvents and to predict its solubility in untested solvents. The method, called the extended Hansen approach, was compared with the extended Hildebrand solubility approach and the universal-functional-group-activity-coefficient (UNIFAC) method. The Hildebrand regular solution theory was also used to calculate naphthalene solubility. Naphthalene, an aromatic molecule having no side chains or functional groups, is "well-behaved", i.e., its solubility in active solvents known to interact with drug molecules is fairly regular. Because of its simplicity, naphthalene is a suitable solute with which to initiate the difficult study of solubility phenomena. The three methods tested (Hildebrand regular solution theory was introduced only for comparison of solubilities in regular solutions) yielded similar results, reproducing naphthalene solubilities within approximately 30% of literature values. In some cases, however, the error was considerably greater. The UNIFAC calculation is superior in that it requires only the solute's heat of fusion, the melting point, and a knowledge of the chemical structures of solute and solvent. The extended Hansen and extended Hildebrand methods need experimental solubility data on which to carry out regression analysis. The extended Hansen approach was the method of second choice because of its adaptability to solutes and solvents from various classes. Sample calculations are included to illustrate methods of predicting solubilities in untested solvents at various temperatures. The UNIFAC method was successful in this regard.
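
    In the spirit of this approach, a minimal regression sketch (the quadratic design matrix in the three partial parameters is an assumption of this illustration, not the paper's exact model):

```python
# Least-squares fit of log-solubility against Hansen partial solubility
# parameters of each solvent; the fitted coefficients can then be used to
# predict solubility in untested solvents.
import numpy as np

def fit_extended_hansen(dD, dP, dH, log_solubility):
    """Fit log S ~ 1 + dD + dD^2 + dP + dP^2 + dH + dH^2 (arrays, one entry
    per solvent); returns the regression coefficients."""
    X = np.column_stack([np.ones_like(dD), dD, dD**2, dP, dP**2, dH, dH**2])
    coef, *_ = np.linalg.lstsq(X, log_solubility, rcond=None)
    return coef
```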

  13. Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-03-01

    Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost, potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.
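
    One common ingredient of such designs is a spatially varying "certainty" map in the style of Fessler and Rogers, sketched below (a hedged illustration with a dense system matrix A and statistical weights w; the certainty terms actually used in this paper may differ):

```python
# kappa_j is a statistically weighted norm of column j of the system matrix;
# scaling the local penalty by kappa_j^2 is one classical route to more
# uniform spatial resolution in PWLS reconstruction.
import numpy as np

def certainty(A: np.ndarray, w: np.ndarray) -> np.ndarray:
    """A: (n_measurements, n_voxels) system matrix; w: (n_measurements,)
    statistical weights. Returns kappa, one value per voxel."""
    col_energy = (A ** 2).sum(axis=0)              # sum_i a_ij^2
    weighted = (A ** 2 * w[:, None]).sum(axis=0)   # sum_i a_ij^2 * w_i
    return np.sqrt(weighted / np.maximum(col_energy, 1e-12))
```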

  14. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
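
    The classical (non-unified) construction that this work generalizes can be sketched in a few lines, assuming between-class and within-class covariance estimates Sb and Sw are given:

```python
# Maximize Fisher's ratio w^T Sb w / w^T Sw w via the generalized symmetric
# eigenproblem Sb v = lambda Sw v; Sw must be positive definite.
import numpy as np
from scipy.linalg import eigh

def fisher_filter(Sb: np.ndarray, Sw: np.ndarray) -> np.ndarray:
    """Return the spatial filter w with the largest between/within ratio."""
    vals, vecs = eigh(Sb, Sw)   # eigenvalues in ascending order
    return vecs[:, -1]          # eigenvector of the largest ratio
```

    Because the objective is a ratio, scaling w has no effect, which is the property the paper exploits to avoid tuning a regularization parameter.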

  15. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  16. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
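
    A hedged sketch of the two-stage idea (using scikit-learn's Lasso as a stand-in for the paper's penalized estimators; the penalty levels alpha1 and alpha2 are placeholders, not tuned values):

```python
# Stage 1: predict each endogenous covariate from the instruments with an l1
# penalty (instrument selection). Stage 2: regress the outcome on the fitted
# covariates, again with an l1 penalty (covariate selection).
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_lasso(Z, X, y, alpha1=0.1, alpha2=0.1):
    """Z: (n, q) instruments; X: (n, p) endogenous covariates; y: (n,) trait."""
    X_hat = np.column_stack([
        Lasso(alpha=alpha1).fit(Z, X[:, j]).predict(Z)
        for j in range(X.shape[1])
    ])                                  # stage 1: fitted (exogenous) covariates
    stage2 = Lasso(alpha=alpha2).fit(X_hat, y)
    return stage2.coef_                 # stage 2: sparse covariate effects
```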

  17. Making Carbon-Nanotube Arrays Using Block Copolymers: Part 2

    NASA Technical Reports Server (NTRS)

    Bronikowski, Michael

    2004-01-01

    Some changes have been incorporated into a proposed method of manufacturing regular arrays of precisely sized, shaped, positioned, and oriented carbon nanotubes. Such arrays could be useful as mechanical resonators for signal filters and oscillators, and as electrophoretic filters for use in biochemical assays. A prior version of the method was described in "Block Copolymers as Templates for Arrays of Carbon Nanotubes" (NPO-30240), NASA Tech Briefs, Vol. 27, No. 4 (April 2003), page 56. To recapitulate from that article: As in other previously reported methods, carbon nanotubes would be formed by decomposition of carbon-containing gases over nanometer-sized catalytic metal particles that had been deposited on suitable substrates. Unlike in other previously reported methods, the catalytic metal particles would not be so randomly and densely distributed as to give rise to thick, irregular mats of nanotubes with a variety of lengths, diameters, and orientations. Instead, in order to obtain regular arrays of spaced-apart carbon nanotubes as nearly identical as possible, the catalytic metal particles would be formed in predetermined regular patterns with precise spacings. The regularity of the arrays would be ensured by the use of nanostructured templates made of block copolymers.

  18. Approximate isotropic cloak for the Maxwell equations

    NASA Astrophysics Data System (ADS)

    Ghosh, Tuhin; Tarikere, Ashwin

    2018-05-01

    We construct a regular isotropic approximate cloak for the Maxwell system of equations. The method of transformation optics has enabled the design of electromagnetic parameters that cloak a region from external observation. However, these constructions are singular and anisotropic, making practical implementation difficult. Thus, regular approximations to these cloaks have been constructed that cloak a given region to any desired degree of accuracy. In this paper, we show how to construct isotropic approximations to these regularized cloaks using homogenization techniques so that one obtains cloaking of arbitrary accuracy with regular and isotropic parameters.

  19. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations

    PubMed Central

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2016-01-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin’s lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407
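
    The anisotropic Gaussian blur model mentioned above can be sketched as follows (a 2D slice illustration with hypothetical kernel size and widths; the clinical setting is 3D):

```python
# A point spread function with different widths in the transverse and axial
# directions, normalized so that convolution preserves total activity.
import numpy as np

def anisotropic_gaussian(size=9, sigma_transverse=2.0, sigma_axial=1.0):
    ax = np.arange(size) - size // 2
    xx, zz = np.meshgrid(ax, ax, indexing="ij")
    k = np.exp(-(xx**2) / (2 * sigma_transverse**2)
               - (zz**2) / (2 * sigma_axial**2))
    return k / k.sum()
```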

  20. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to underestimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles, respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.
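
    The idea behind the first part can be illustrated in the scalar case with the minimax-concave (MC) penalty, a standard parameterized non-convex regularizer: the objective 0.5*(y - x)**2 + lam*phi(x; a) remains convex whenever a <= 1/lam, so choosing a in that range keeps the global minimum unique (a sketch under that assumption; the thesis treats the general multivariate setting):

```python
import numpy as np

def mc_penalty(x, a):
    """MC penalty: |x| - a*x^2/2 for |x| <= 1/a, constant 1/(2a) beyond."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 1.0 / a,
                    np.abs(x) - 0.5 * a * x**2,
                    0.5 / a)

def firm_threshold(y, lam, a):
    """Exact minimizer of 0.5*(y-x)^2 + lam*mc_penalty(x; a) for a < 1/lam:
    zero below lam, identity above 1/a, linear in between."""
    y = np.asarray(y, dtype=float)
    mag, sgn = np.abs(y), np.sign(y)
    return np.where(mag <= lam, 0.0,
           np.where(mag >= 1.0 / a, y,
                    sgn * (mag - lam) / (1.0 - lam * a)))
```

    Unlike soft thresholding, the firm threshold leaves large coefficients untouched, which is exactly the reduced bias that motivates non-convex penalties.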

  1. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, but existing methods still need improvement in several respects. On one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by the existing methods are not clear enough. Our work is as follows. First, we propose a method to extract the midfrequency features for dictionary learning. This method reduces memory and time complexity without sacrificing performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct the sharp step edges. This model adds a novel constraint on the downsampling version of the high-resolution image to wipe off the details and artifacts and sharpen the step edges. Finally, step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.

  2. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-squares type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.

  3. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.

  4. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
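
    For reference, the standard computation by one of these methods: split the octahedron of edge a into two square pyramids of height a/√2, giving

```latex
V \;=\; 2 \cdot \frac{1}{3}\, a^{2} \cdot \frac{a}{\sqrt{2}}
  \;=\; \frac{\sqrt{2}}{3}\, a^{3} \;\approx\; 0.4714\, a^{3}.
```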

  5. Accurate mask-based spatially regularized correlation filter for visual tracking

    NASA Astrophysics Data System (ADS)

    Gu, Xiaodong; Xu, Xinping

    2017-01-01

    Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods used a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which gives the algorithm fast convergence. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.
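
    For background, the single-channel regularized correlation filter has a well-known closed form in the Fourier domain, sketched below with a simple scalar ridge term lam (the paper's mask-based spatial regularizer replaces this scalar penalty):

```python
# Ridge-regression (MOSSE-style) correlation filter learned per frequency.
import numpy as np

def learn_filter(x, y, lam=1e-2):
    """x: training patch; y: desired (e.g., Gaussian) response, same shape."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)  # closed-form solution

def respond(H, z):
    """Correlate a new patch z with the filter; the peak locates the target."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(z)))
```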

  6. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  7. Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.

    PubMed

    Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob

    2011-03-01

    We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find that it is considerably faster than competing methods.
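
    The flavor of the coordinate-descent update can be seen in the plain (non-Cox) elastic-net case below, a simplified sketch assuming standardized columns of X; the Cox version wraps the same update in an outer loop over weighted working responses:

```python
import numpy as np

def soft(z, g):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def elastic_net_cd(X, y, lam=0.1, alpha=0.5, n_iter=100):
    """Cyclical coordinate descent for 1/(2n)*||y - X b||^2 +
    lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2), columns of X standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                      # residual, kept up to date
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]        # remove coordinate j's contribution
            z = X[:, j] @ r / n
            beta[j] = soft(z, lam * alpha) / (1.0 + lam * (1 - alpha))
            r -= X[:, j] * beta[j]        # add updated contribution back
    return beta
```

    Warm starts simply reuse the previous beta as the initial point for the next (smaller) lam on the path, which is what makes the pathwise computation cheap.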

  8. Scalar field coupling to Einstein tensor in regular black hole spacetime

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Wu, Chen

    2018-02-01

    In this paper, we study the perturbation properties of a scalar field coupled to the Einstein tensor in the background of regular black hole spacetimes. Our calculations show that the coupling constant η imprints on the wave equation of a scalar perturbation. We calculate the quasinormal modes of the scalar field coupled to the Einstein tensor in regular black hole spacetimes by the third-order WKB method.

  9. Applications of compressed sensing image reconstruction to sparse view phase tomography

    NASA Astrophysics Data System (ADS)

    Ueda, Ryosuke; Kudo, Hiroyuki; Dong, Jian

    2017-10-01

    X-ray phase CT has the potential to give higher contrast in soft tissue observations. To shorten the measurement time, sparse-view CT data acquisition has been attracting attention. This paper applies two major compressed sensing (CS) approaches to image reconstruction in x-ray sparse-view phase tomography. The first CS approach is the standard Total Variation (TV) regularization. The major drawbacks of TV regularization are a patchy artifact and loss of smooth intensity changes due to the piecewise constant nature of the image model. The second CS method is a relatively new approach, which uses a nonlinear smoothing filter to design the regularization term. The nonlinear filter based CS is expected to reduce the major artifact of TV regularization. Both cost functions can be minimized by a very fast iterative reconstruction method. However, past research has not clearly demonstrated how much image quality difference occurs between TV regularization and the nonlinear filter based CS in x-ray phase CT applications. We clarify this issue by applying the two CS approaches to x-ray phase tomography. We provide results with numerically simulated data, which demonstrate that the nonlinear filter based CS outperforms TV regularization in terms of textures and smooth intensity changes.

  10. What Type of Lectures Students Want? - A Reaction Evaluation of Dental Students

    PubMed Central

    Roopa, Srinivasan; Geetha M, Bagavad; Rani, Anitha; Chacko, Thomas

    2013-01-01

    Introduction: A one-hour didactic lecture is the common method of teaching in dental colleges in India. Lengthy lectures are boring, and students are passive recipients of the information. Interactive lectures are suggested as a means of overcoming the disadvantages of regular lectures. Aims: The present study was conducted to pilot various methods of making lectures interactive and to find the students’ reactions to interactive lectures as compared to regular lectures. Material and Methods: An entire batch of first year dental students (n = 78) was exposed to both interactive and regular lectures for the cardiovascular system in physiology. Among the total number of 12 lectures, alternate lectures were conducted in an interactive style. At the end of the 12 lecture series, students’ opinions were obtained using a structured feedback evaluation questionnaire, consisting of five statements, on a five point Likert scale. Statistical analysis was done using SPSS software, version 15. Results: Interactive lectures were found to be more useful than regular lectures by 92% of the students. Significantly more students agreed or strongly agreed that interactive lectures kept them attentive, created interest, overcame monotony, motivated them toward self-learning, and provided better-defined learning than regular lectures. Among the different techniques used, the students preferred video clips (58.1%), followed by each-one-teach-one. The results of the present study support the use of interactive lectures for ensuring increased interest and attention of students during lectures. Conclusion: Interactive lectures were more accepted and considered to be more useful than regular lectures by the students. PMID:24298487

  11. Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method

    PubMed Central

    Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Objective Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Materials and Methods Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. Results A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. Conclusion The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials. PMID:22778560

  12. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.

  13. Joint image and motion reconstruction for PET using a B-spline motion model.

    PubMed

    Blume, Moritz; Navab, Nassir; Rafecas, Magdalena

    2012-12-21

    We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.

  14. Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization

    NASA Astrophysics Data System (ADS)

    Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna

    2014-12-01

    We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields, the resulting renormalization method is always applicable, we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.

  15. Apparatus and procedure to characterize the surface quality of conductors by measuring the rate of cathode emission as a function of surface electric field strength

    DOEpatents

    Mestayer, Mac; Christo, Steve; Taylor, Mark

    2014-10-21

    A device and method for characterizing the quality of a conducting surface. The device includes a gaseous ionizing chamber having, centrally located inside the chamber, a conducting sample to be tested to which a negative potential is applied; a plurality of anode or "sense" wires spaced regularly about the central test wire; a plurality of "field wires" at a negative potential spaced regularly around the sense wires; and a plurality of "guard wires" at a positive potential spaced regularly around the field wires in the chamber. The method utilizes the device to measure emission currents from the conductor.

  16. Surface-based prostate registration with biomechanical regularization

    NASA Astrophysics Data System (ADS)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected onto ultrasound by using MR-US registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as a boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR-guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.

  17. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from the low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes the image by total variation (TV) and/or a wavelet or some other transform, which introduces some artifacts. To overcome these shortcomings, a new framework for single image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial relevance among pixels first, and then only the high frequency (HF) part, which is sparser in the TV and transform domain, is considered as the regularization term. Concretely, through transforming the original model, the SR problem can be solved by two alternating iteration sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high quality HF part and HR image can be obtained by solving the first and second sub-problem, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.

  18. Lipschitz regularity results for nonlinear strictly elliptic equations and applications

    NASA Astrophysics Data System (ADS)

    Ley, Olivier; Nguyen, Vinh Duc

    2017-10-01

    Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic, for instance). For equations with superquadratic growth power in the gradient, one usually uses weak Bernstein-type arguments, which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth power of the Hamiltonian with respect to the gradient variable, using some ideas coming from the Ishii-Lions method. We use these bounds to solve an ergodic problem and to study the regularity and the large time behavior of the solution of the evolution equation.

  19. Background field removal technique based on non-regularized variable kernels sophisticated harmonic artifact reduction for phase data for quantitative susceptibility mapping.

    PubMed

    Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2018-06-11

    We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and save cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In a numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field could sufficiently achieve minimal boundary losses and phase error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM.

  20. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem. Simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
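
    The projected Landweber step at the core of such schemes, in a minimal sketch (nonnegativity as the projection set and a dense measurement matrix A are assumptions; the paper interleaves a guided-filter denoising stage with these updates):

```python
# Gradient step on ||y - A x||^2 followed by projection onto the constraint
# set; the step size tau must satisfy tau < 2 / ||A||^2 for convergence.
import numpy as np

def projected_landweber(A, y, n_iter=200):
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)          # Landweber gradient step
        x = np.maximum(x, 0.0)                   # projection (nonnegativity)
    return x
```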

  1. 50 CFR 221.70 - How must documents be filed and served under this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... party and FERC, using: (i) One of the methods of service in § 221.13(c); or (ii) Regular mail. (2) The... this subpart? (a) Filing. (1) A document under this subpart must be filed using one of the methods set... document received after 5 p.m. at the place where the filing is due is considered filed on the next regular...

  2. Vacuum polarization in the field of a multidimensional global monopole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grats, Yu. V., E-mail: grats@phys.msu.ru; Spirin, P. A.

    2016-11-15

    An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values ⟨φ²⟩_ren and ⟨T₀₀⟩_ren have been derived by the dimensional regularization method. A comparison with the results obtained by alternative regularization methods is made.

  3. Fast Spatial Resolution Analysis of Quadratic Penalized Least-Squares Image Reconstruction With Separate Real and Imaginary Roughness Penalty: Application to fMRI.

    PubMed

    Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A

    2018-02-01

    Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.

  4. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparsity rate of the model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparsity rate of the model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  5. Beer volatile compounds and their application to low-malt beer fermentation.

    PubMed

    Kobayashi, Michiko; Shimizu, Hiroshi; Shioya, Suteaki

    2008-10-01

    Low-malt beers, in which the amount of wort is adjusted to less than two-thirds of that in regular beer, are popular in the Japanese market because the flavor of low-malt beer is similar to that of regular beer but its price is lower. There are few published articles about low-malt beer. However, there are many similarities between the low-malt and regular beer production processes; e.g., the yeast used in low-malt beer fermentation is the same as that used for regular beer. Furthermore, many investigations into regular beer are applicable to low-malt beer production. In this review, we focus on the production of volatile compounds and on various studies that are applicable to regular and low-malt beer. In particular, information about the metabolism of volatile compounds in yeast cells during fermentation, volatile compound measurement and estimation methods, and control of volatile compound production is discussed in this review, which concentrates on studies published in the last 5-6 years.

  6. High-resolution EEG techniques for brain-computer interface applications.

    PubMed

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Astolfi, Laura; De Vico Fallani, Fabrizio; Tocci, Andrea; Bianchi, Luigi; Marciani, Maria Grazia; Gao, Shangkai; Millan, Jose; Babiloni, Fabio

    2008-01-15

    High-resolution electroencephalographic (HREEG) techniques allow estimation of cortical activity based on non-invasive scalp potential measurements, using appropriate models of volume conduction and of neuroelectrical sources. In this study we propose an application of this body of technologies, originally developed to obtain functional images of the brain's electrical activity, in the context of brain-computer interfaces (BCI). Our working hypothesis predicted that, since HREEG pre-processing removes the spatial correlation introduced by current conduction in the head structures, by providing the BCI with waveforms that are mostly due to the unmixed activity of a small cortical region, a more reliable classification would be obtained, at least when the activity to detect has a limited generator, which is the case in motor related tasks. The HREEG techniques employed in this study rely on (i) individual head models derived from anatomical magnetic resonance images, (ii) a distributed source model, composed of a layer of current dipoles, geometrically constrained to the cortical mantle, (iii) a depth-weighted minimum L2-norm constraint and Tikhonov regularization for the linear inverse problem solution, and (iv) estimation of electrical activity in cortical regions of interest corresponding to relevant Brodmann areas. Six subjects were trained to learn self modulation of sensorimotor EEG rhythms, related to the imagination of limb movements. Off-line EEG data was used to estimate waveforms of cortical activity (cortical current density, CCD) on selected regions of interest. CCD waveforms were fed into the BCI computational pipeline as an alternative to raw EEG signals; spectral features are evaluated through statistical tests (r² analysis) to quantify their reliability for BCI control. These results are compared, within subjects, to analogous results obtained without HREEG techniques. The processing procedure was designed in such a way that computations could be split into a setup phase (which includes most of the computational burden) and the actual EEG processing phase, which was limited to a single matrix multiplication. This separation made the procedure suitable for on-line utilization, and a pilot experiment was performed. Results show that lateralization of electrical activity, which is expected to be contralateral to the imagined movement, is more evident on the estimated CCDs than in the scalp potentials. CCDs produce a pattern of relevant spectral features that is more spatially focused and has a higher statistical significance (EEG: 0.20 ± 0.114 S.D.; CCD: 0.55 ± 0.16 S.D.; p = 10⁻⁵). A pilot experiment showed that a trained subject could utilize voluntary modulation of estimated CCDs for accurate (eight targets) on-line control of a cursor. This study showed that it is practically feasible to utilize HREEG techniques for on-line operation of a BCI system; off-line analysis suggests that the accuracy of BCI control is enhanced by the proposed method.
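
    Item (iii) is the part most directly tied to Tikhonov regularization; a minimal sketch of a depth-weighted, Tikhonov-regularized minimum-norm inverse kernel is given below (the lead field G, depth weights, and lam are illustrative; the study's pipeline adds the anatomical constraints of items (i)-(ii)):

```python
# With lead field G (sensors x sources) and diagonal depth weighting W, the
# regularized minimum-norm inverse is J = W G^T (G W G^T + lam*I)^(-1) v.
import numpy as np

def minimum_norm_inverse(G, depth_w, lam=1e-2):
    """Return the linear inverse kernel K; CCD waveforms = K @ scalp EEG."""
    W = np.diag(depth_w)                       # depth weighting of the sources
    n_sensors = G.shape[0]
    return W @ G.T @ np.linalg.inv(G @ W @ G.T + lam * np.eye(n_sensors))
```

    Precomputing K once is what allows the on-line phase described above to reduce to a single matrix multiplication.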

  7. Existence, regularity, and concentration phenomenon of nontrivial solitary waves for a class of generalized variable coefficient Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Alves, Claudianor O.; Miyagaki, Olímpio H.

    2017-08-01

    In this paper, we establish some results concerning the existence, regularity, and concentration phenomenon of nontrivial solitary waves for a class of generalized variable coefficient Kadomtsev-Petviashvili equation. Variational methods are used to get an existence result, as well as to study the concentration phenomenon, while the regularity is more delicate because we are dealing with functions in an anisotropic Sobolev space.

  8. Dancing with Black Holes

    NASA Astrophysics Data System (ADS)

    Aarseth, S. J.

    2008-05-01

    We describe efforts over the last six years to implement regularization methods suitable for studying one or more interacting black holes by direct N-body simulations. Three different methods have been adapted to large-N systems: (i) Time-Transformed Leapfrog, (ii) Wheel-Spoke, and (iii) Algorithmic Regularization. These methods have been tried out with some success on GRAPE-type computers. Special emphasis has also been devoted to including post-Newtonian terms, with application to moderately massive black holes in stellar clusters. Some examples of simulations leading to coalescence by gravitational radiation will be presented to illustrate the practical usefulness of such methods.

  9. The Mimetic Finite Element Method and the Virtual Element Method for elliptic problems with arbitrary regularity.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco

    2012-07-13

    We develop and analyze a new family of virtual element methods on unstructured polygonal meshes for the diffusion problem in primal form, that use arbitrarily regular discrete spaces V_h ⊂ C^α, α ∈ ℕ. The degrees of freedom are (a) solution and derivative values of various degrees at suitable nodes and (b) solution moments inside polygons. The convergence of the method is proven theoretically and an optimal error estimate is derived. The connection with the Mimetic Finite Difference method is also discussed. Numerical experiments confirm the convergence rate that is expected from the theory.

  10. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data

    PubMed Central

    Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho

    2017-01-01

    With rapid urbanization, highly accurate and semantically rich virtualization of building assets in 3D becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city-scale from remotely sensed data. However, developing a fully-automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks for 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city-scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of the building boundary in a progressive manner. This study covers a full chain of 3D building modeling, from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on the segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested on the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark datasets. The results show that the proposed method can robustly produce accurate regularized 3D building rooftop models. PMID:28335486

  11. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data.

    PubMed

    Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho

    2017-03-19

    With rapid urbanization, highly accurate and semantically rich 3D virtualization of building assets becomes increasingly critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city-scale from remotely sensed data. However, developing a fully-automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noisy boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city-scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of building boundaries in a progressive manner. This study covers a full chain of 3D building modeling from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested on the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark datasets. The results show that the proposed method can robustly produce accurate regularized 3D building rooftop models.

  12. Quaternion Regularization of the Equations of the Perturbed Spatial Restricted Three-Body Problem: I

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    2017-11-01

    We develop a quaternion method for regularizing the differential equations of the perturbed spatial restricted three-body problem by using the Kustaanheimo-Stiefel variables; the method is methodologically closely related to the quaternion method for regularizing the differential equations of the perturbed spatial two-body problem proposed earlier by the author. A survey of papers related to the regularization of the differential equations of the two- and three-body problems is given. The original Newtonian equations of the perturbed spatial restricted three-body problem are considered, and the problem of their regularization is posed; the energy relations and the differential equations describing the variations in the energies of the system in the perturbed spatial restricted three-body problem are given, as well as the first integrals of the differential equations of the unperturbed spatial restricted circular three-body problem (Jacobi integrals); the equations of the perturbed spatial restricted three-body problem written in terms of rotating coordinate systems whose angular motion is described by the rotation quaternions (Euler (Rodrigues-Hamilton) parameters) are considered; and the differential equations for angular momenta in the restricted three-body problem are given. Local regular quaternion differential equations of the perturbed spatial restricted three-body problem in the Kustaanheimo-Stiefel variables, i.e., equations regular in a neighborhood of the first and second bodies of finite mass, are obtained. The equations are systems of nonlinear nonstationary eleventh-order differential equations. These equations employ, as additional dependent variables, the energy characteristics of motion of the body under study (a body of negligibly small mass) and the time whose derivative with respect to a new independent variable is equal to the distance from the body of negligibly small mass to the first or second body of finite mass. The equations obtained in the paper permit developing regular methods for determining solutions, in analytical or numerical form, of problems difficult for classical methods, such as the motion of a body of negligibly small mass in a neighborhood of the other two bodies of finite masses.

  13. Multilinear Graph Embedding: Representation and Regularization for Images.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  14. Task-based statistical image reconstruction for high-quality cone-beam CT

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR—viz., penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization approach in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.

  15. Investigation of the hysteresis phenomena in steady shock reflection using kinetic and continuum methods

    NASA Astrophysics Data System (ADS)

    Ivanov, M.; Zeitoun, D.; Vuillon, J.; Gimelshein, S.; Markelov, G.

    1996-05-01

    The problem of the transition of planar shock waves over straight wedges in steady flows from regular to Mach reflection and back was studied numerically by the DSMC method for solving the Boltzmann equation and by a finite difference method with the FCT algorithm for solving the Euler equations. It is shown that the transition from regular to Mach reflection takes place in accordance with the detachment criterion, while the opposite transition occurs at smaller angles. A hysteresis effect was observed as the shock wave angle was increased and decreased.

  16. Gait symmetry and regularity in transfemoral amputees assessed by trunk accelerations

    PubMed Central

    2010-01-01

    Background The aim of this study was to evaluate a method based on a single accelerometer for the assessment of gait symmetry and regularity in subjects wearing lower limb prostheses. Methods Ten transfemoral amputees and ten healthy control subjects were studied. For the purpose of this study, subjects wore a triaxial accelerometer on their thorax, and foot insoles. Subjects were asked to walk straight ahead for 70 m at their natural speed, and at a lower and a faster speed. Indices of step and stride regularity (Ad1 and Ad2, respectively) were obtained from the autocorrelation coefficients computed from the three acceleration components. Step and stride durations were calculated from the plantar pressure data and were used to compute two reference indices (SI1 and SI2) for step and stride regularity. Results Regression analysis showed that Ad1 correlates well with SI1 (R2 up to 0.74) and Ad2 correlates well with SI2 (R2 up to 0.52). A ROC analysis showed that Ad1 and Ad2 generally have good sensitivity and specificity in classifying an amputee's walking trial as having normal or pathologic step or stride regularity, as defined by the reference indices SI1 and SI2. In particular, the antero-posterior component of Ad1 and the vertical component of Ad2 had a sensitivity of 90.6% and 87.2%, and a specificity of 92.3% and 81.8%, respectively. Conclusions A simple accelerometer, whose components can be analyzed by the autocorrelation function method, is adequate for the assessment of gait symmetry and regularity in transfemoral amputees. PMID:20085653
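
    A minimal sketch of the autocorrelation-based regularity indices described above (not the authors' code): Ad1 and Ad2 are read off the normalized autocorrelation of one trunk acceleration component at the lags of one step and one stride. The sampling rate, lag choices and variable names are assumptions.

      import numpy as np

      def regularity_indices(acc, fs, step_time, stride_time):
          """Return (Ad1, Ad2): autocorrelation at one-step and one-stride lags."""
          acc = acc - acc.mean()                    # remove offset/gravity component
          ac = np.correlate(acc, acc, mode="full")  # raw autocorrelation
          ac = ac[ac.size // 2:]                    # keep non-negative lags
          ac /= ac[0]                               # normalize so ac[0] == 1
          d1 = int(round(step_time * fs))           # lag of one step
          d2 = int(round(stride_time * fs))         # lag of one stride
          return ac[d1], ac[d2]

      # Example: synthetic vertical acceleration with a 0.5 s step period
      fs = 100.0
      t = np.arange(0, 10, 1 / fs)
      acc = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)
      Ad1, Ad2 = regularity_indices(acc, fs, step_time=0.5, stride_time=1.0)
      print(f"Ad1 (step) = {Ad1:.2f}, Ad2 (stride) = {Ad2:.2f}")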

  17. Health services utilization of people having and not having a regular doctor in Canada.

    PubMed

    Thanh, Nguyen Xuan; Rapoport, John

    2017-04-01

    Canada, with a universal health insurance plan that provides hospital and physician benefits, offers a natural experiment on whether continuity of care leads to lower or higher utilization of services. The question we evaluate is whether Canadians who have a regular physician use more health resources than those who do not. Using two statistical methods, propensity score matching and zero-inflated negative binomial regression, we analyzed data from the 2010 and 2007/2008 Canadian Community Health Surveys separately to document differences between people self-reportedly having and not having a regular doctor in the utilization of general practitioner, specialist, and hospital services. The results showed, consistently for both statistical methods and both datasets, that people who reported having a regular doctor used more healthcare services than a matched group of people who reported not having one. For specialist and hospital utilization, the statistically significant differences were in the likelihood that the service was used, not in the number of specialist visits or hospital nights among users. Copyright © 2016 John Wiley & Sons, Ltd.

  18. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of the signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053

  19. Regularization method for large eddy simulations of shock-turbulence interactions

    NASA Astrophysics Data System (ADS)

    Braun, N. O.; Pullin, D. I.; Meiron, D. I.

    2018-05-01

    The rapid change in scales over a shock has the potential to introduce unique difficulties in Large Eddy Simulations (LES) of compressible shock-turbulence flows if the governing model does not sufficiently capture the spectral distribution of energy in the upstream turbulence. A method for the regularization of LES of shock-turbulence interactions is presented which is constructed to enforce that the energy content in the highest resolved wavenumbers decays as k - 5 / 3, and is computed locally in physical-space at low computational cost. The application of the regularization to an existing subgrid scale model is shown to remove high wavenumber errors while maintaining agreement with Direct Numerical Simulations (DNS) of forced and decaying isotropic turbulence. Linear interaction analysis is implemented to model the interaction of a shock with isotropic turbulence from LES. Comparisons to analytical models suggest that the regularization significantly improves the ability of the LES to predict amplifications in subgrid terms over the modeled shockwave. LES and DNS of decaying, modeled post shock turbulence are also considered, and inclusion of the regularization in shock-turbulence LES is shown to improve agreement with lower Reynolds number DNS.
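
    To make the target spectral behavior concrete, the sketch below rescales the highest resolved wavenumbers of a periodic 1D signal so that its energy spectrum decays as k^(-5/3) beyond a chosen cutoff. This only illustrates the enforced spectrum, not the paper's scheme, which works locally in physical space; the cutoff fraction and all names are assumptions.

      import numpy as np

      def enforce_inertial_tail(u, k_c_frac=0.5):
          """Rescale modes above a cutoff toward an E(k) ~ k^(-5/3) decay."""
          n = u.size
          U = np.fft.rfft(u)
          k = np.arange(U.size)                   # integer wavenumbers
          k_c = int(k_c_frac * (U.size - 1))      # start of the regularized band
          E = np.abs(U) ** 2                      # energy per mode
          tail = k > k_c
          target = E[k_c] * (k[tail] / k_c) ** (-5.0 / 3.0)
          U[tail] *= np.sqrt(target / np.maximum(E[tail], 1e-30))  # keep phases
          return np.fft.irfft(U, n)

      u = np.random.randn(256)          # stand-in for one line of an LES field
      u_reg = enforce_inertial_tail(u)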

  20. Integrative analysis of gene expression and copy number alterations using canonical correlation analysis.

    PubMed

    Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus

    2010-04-15

    With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
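
    As a point of reference for the PCA+CCA variant used above, the sketch below reduces each view with PCA and then applies classical CCA; regularized dual CCA needs a custom solver and is not reproduced here. The array shapes, component counts and variable names are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      expr = rng.standard_normal((60, 5000))   # samples x expression probes
      cna = rng.standard_normal((60, 3000))    # samples x copy-number probes

      # Reduce each view first so that classical CCA is well posed (n >> dims)
      X = PCA(n_components=20).fit_transform(expr)
      Y = PCA(n_components=20).fit_transform(cna)

      cca = CCA(n_components=2).fit(X, Y)
      U, V = cca.transform(X, Y)               # correlation-maximizing features
      print(np.corrcoef(U[:, 0], V[:, 0])[0, 1])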

  1. Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization

    DTIC Science & Technology

    2016-11-22

    structure of the graph, we replace the ℓ1-norm by the nonconvex Capped-ℓ1 norm, and obtain the Generalized Capped-ℓ1 regularized logistic regression... X. M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82(281):301... better approximations of the ℓ0-norm theoretically and computationally beyond the ℓ1-norm, for example, in compressive sensing (Xiao et al., 2011). The...

  2. The LPM effect in sequential bremsstrahlung: dimensional regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin

    The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  3. The LPM effect in sequential bremsstrahlung: dimensional regularization

    DOE PAGES

    Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin

    2016-10-19

    The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  4. Prediction of the binding affinities of peptides to class II MHC using a regularized thermodynamic model

    PubMed Central

    2010-01-01

    Background The binding of peptide fragments of extracellular peptides to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC. Results We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, contributed significantly, accounting for at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/. PMID:20089173

  5. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    PubMed

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing, tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other multi- and single-objective optimization methods. A significant performance enhancement over traditional techniques can be inferred from the results.

  6. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    NASA Astrophysics Data System (ADS)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more expressive sparse representation framework for images is still needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, verify that the proposed reconstruction method shows promising capabilities over conventional regularization.

  7. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.

  8. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  9. Accelerating 4D flow MRI by exploiting vector field divergence regularization.

    PubMed

    Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian

    2016-01-01

    To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing the divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method based on the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve the reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
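
    A simplified stand-in for the finite-difference divergence penalty (not the authors' reconstruction code): the ℓ1-norm of the discrete divergence of a sampled velocity field, which would be weighted and added to the data-fidelity term. Grid spacing and field names are assumptions.

      import numpy as np

      def divergence_l1(vx, vy, vz, dx=1.0):
          """ℓ1-norm of div(v) for a velocity field on a uniform cubic grid."""
          div = (np.gradient(vx, dx, axis=0)
                 + np.gradient(vy, dx, axis=1)
                 + np.gradient(vz, dx, axis=2))
          return np.abs(div).sum()

      v = np.random.randn(3, 16, 16, 16)        # toy 3-component field
      print(divergence_l1(v[0], v[1], v[2]))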

  10. Assessment and Evaluation in Adapted Physical Education.

    ERIC Educational Resources Information Center

    Kennedy, Charlotte I.; Bundschuh, Ernie

    Intended for physical education teachers, the booklet describes informal and formal methods for evaluating handicapped children to determine whether they can participate in a regular physical education program with nonhandicapped students, in a regular physical education program with modification, or in a specially designed physical education…

  11. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    PubMed

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test; hence, reducing the radiation dose from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving edges and boundaries more accurately. In this work the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm consists of two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
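
    For concreteness, the linear-time shrinkage step named above is elementwise soft-thresholding, the proximal operator of the ℓ1-norm; a generic sketch (not the authors' implementation) follows.

      import numpy as np

      def shrink(x, tau):
          """Soft-threshold x elementwise: argmin_z tau*|z| + 0.5*(z - x)^2."""
          return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

      g = np.array([-1.5, -0.2, 0.0, 0.4, 2.0])
      print(shrink(g, 0.5))                     # [-1. -0.  0.  0.  1.5]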

  12. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. In this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure-based regularization has the potential to balance structural a priori information with data-driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier for physicians to interpret.

  13. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    Noisy low resolution (LR) images are common in real applications, but many existing image magnification algorithms cannot obtain good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm combines the advantages of regularization-based and learning-based methods: the first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image and at the same time suppress the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm can also provide better visual quality on natural LR images.

  14. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model order and model complexity reductions were investigated. Singular perturbation methods were attempted first and found to be unsatisfactory. A second approach based on regular perturbation analysis was subsequently investigated. It also failed because the aerodynamic effects (ignored in the zero order solution) are too large to be treated as perturbations. The study therefore demonstrates that perturbation methods alone (both regular and singular) are inadequate for developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation and the analytical method of regular perturbations, and introduces the concept of choosing intelligent interpolating functions. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.

  15. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    PubMed

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternative finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the point of maximum curvature in the traction versus γ plot as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction quality compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblasts (MEFs) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22%, respectively, as compared to Reg-FTTC. Selecting an optimal value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
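
    A minimal sketch of the maximum-curvature rule for choosing γ* (not the authors' code): given traction magnitudes computed over a sweep of γ values, pick the γ where the curvature of the curve is largest. The logarithmic γ axis and the toy curve are assumptions.

      import numpy as np

      def max_curvature_gamma(gammas, T):
          """Return gamma at the maximum-curvature point of T(gamma)."""
          lg = np.log10(gammas)                 # curvature on a log axis
          dT = np.gradient(T, lg)
          d2T = np.gradient(dT, lg)
          kappa = np.abs(d2T) / (1.0 + dT**2) ** 1.5
          return gammas[np.argmax(kappa)]

      gammas = np.logspace(-6, 0, 50)
      T = 1.0 / (1.0 + 1e3 * gammas)            # toy monotone traction curve
      print(max_curvature_gamma(gammas, T))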

  16. Effects of preparation methods on protein and amino acid contents of various eggs available in Malaysian local markets.

    PubMed

    Ismail, Maznah; Mariod, Abdalbasit; Pin, Sia Soh

    2013-01-01

    The effect of preparation methods (raw, half-boiled and hard-boiled) on the protein and amino acid contents, as well as the protein quality (amino acid score), of regular, kampung and nutrient-enriched Malaysian eggs was investigated. The protein content was determined using a semi-micro Kjeldahl method, whereas the amino acid composition was determined using HPLC. The protein content of raw regular, kampung and nutrient-enriched eggs was 49.9 ±0.2%, 55.8 ±0.2% and 56.5 ±0.5%, respectively. The protein content of hard-boiled regular, kampung and nutrient-enriched eggs was 56.8 ±0.1%, 54.7 ±0.1% and 53.7 ±0.5%, while that of half-boiled regular, kampung and nutrient-enriched eggs was 54.7 ±0.6%, 53.4 ±0.4% and 55.1 ±0.7%, respectively. There were significant differences (p < 0.05) in the protein and amino acid contents of half-boiled and hard-boiled samples as compared with raw samples, and valine was found to be the limiting amino acid. There were also significant differences (p < 0.05) in the total amino acid score of regular, kampung and nutrient-enriched eggs after heat treatment. Furthermore, hard-boiling (100°C) for 10 minutes and half-boiling (100°C) for 5 minutes affected the total amino acid score, which in turn alters the protein quality of the egg.

  17. Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem

    NASA Astrophysics Data System (ADS)

    Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.

    2017-05-01

    In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
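
    As a generic illustration of the regularized Gauss-Newton iteration named above (not the paper's bioheat solver), the sketch below applies a Tikhonov-damped Gauss-Newton step to a toy exponential-decay fit; the model, Jacobian and damping value are assumptions.

      import numpy as np

      def reg_gauss_newton(residual, jacobian, p0, lam=1e-3, iters=20):
          """Minimize ||r(p)||^2 via damped normal-equation steps."""
          p = p0.copy()
          for _ in range(iters):
              r, J = residual(p), jacobian(p)
              step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
              p += step
          return p

      # Toy problem: recover the decay rate a in y = exp(-a t)
      t = np.linspace(0, 1, 30)
      y = np.exp(-2.0 * t)
      res = lambda p: np.exp(-p[0] * t) - y
      jac = lambda p: (-t * np.exp(-p[0] * t)).reshape(-1, 1)
      print(reg_gauss_newton(res, jac, np.array([0.5])))   # approx. [2.0]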

  18. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs); both effectively integrate the tangent space intrinsic manifold regularization. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  19. Comparison of detrending methods for fluctuation analysis in hydrology

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David

    2011-03-01

    Summary: Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA). Therefore, removal of trends is important in the study of the scaling properties of a time series. In this study, three detrending methods, the adaptive detrending algorithm (ADA), a Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA are similar in detrending practice and, given proper parameters, can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods in that the fluctuation information at larger time scales is kept well, an indication of relatively reliable detrending performance. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in the study of the scaling properties of hydrometeorological series with relatively regular periodic trends using MF-DFA.
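
    A minimal sketch of the average removing technique favored above, assuming a series with a known period: the mean cycle over all periods is computed and subtracted before fluctuation analysis. The period length and data are illustrative.

      import numpy as np

      def average_removing(x, period):
          """Subtract the mean periodic cycle from a series."""
          n = (x.size // period) * period
          blocks = x[:n].reshape(-1, period)    # one row per period
          cycle = blocks.mean(axis=0)           # average periodic trend
          return (blocks - cycle).ravel()       # detrended series

      # Example: 20 "years" of a 365-sample annual cycle plus noise
      t = np.arange(20 * 365)
      x = np.sin(2 * np.pi * t / 365) + 0.3 * np.random.randn(t.size)
      resid = average_removing(x, 365)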

  20. Connection between the regular approximation and the normalized elimination of the small component in relativistic quantum theory

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-02-01

    The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10^(-9) hartree units. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.

  1. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, considering robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. An irregular lattice method for elastic wave propagation

    NASA Astrophysics Data System (ADS)

    O'Brien, Gareth S.; Bean, Christopher J.

    2011-12-01

    Lattice methods are a class of numerical scheme which represent a medium as a connection of interacting nodes or particles. In the case of modelling seismic wave propagation, the interaction term is determined from Hooke's law including a bond-bending term. This approach has been shown to model isotropic seismic wave propagation in an elastic or viscoelastic medium by selecting the appropriate underlying lattice structure. To predetermine the material constants, this methodology has been restricted to regular grids, hexagonal or square in 2-D or cubic in 3-D. Here, we present a method for isotropic elastic wave propagation that removes this lattice restriction. The methodology is outlined and a relationship between the elastic material properties and an irregular lattice geometry is derived. The numerical method is compared with an analytical solution for wave propagation in an infinite homogeneous body, and with a numerical solution for a layered elastic medium. The dispersion properties of this method are derived from a plane wave analysis, showing the scheme is more dispersive than a regular lattice method. Therefore, the computational costs of using an irregular lattice are higher. However, by removing the regular lattice structure the anisotropic nature of fracture propagation in such methods can be removed.

  3. On-Line Identification of Simulation Examples for Forgetting Methods to Track Time Varying Parameters Using the Alternative Covariance Matrix in Matlab

    NASA Astrophysics Data System (ADS)

    Vachálek, Ján

    2011-12-01

    The paper compares the abilities of forgetting methods to track the time-varying parameters of two different simulated models under different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a selected-band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
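
    For context, a baseline recursive least squares estimator with plain exponential forgetting is sketched below; REFACM's regularization and alternative covariance matrix are not reproduced, and all signal names are assumptions.

      import numpy as np

      def rls_forgetting(phis, ys, lam=0.98, delta=100.0):
          """Track time-varying parameters with forgetting factor lam < 1."""
          n = phis.shape[1]
          theta = np.zeros(n)
          P = delta * np.eye(n)                     # initial covariance
          for phi, y in zip(phis, ys):
              k = P @ phi / (lam + phi @ P @ phi)   # gain
              theta = theta + k * (y - phi @ theta)
              P = (P - np.outer(k, phi @ P)) / lam  # forgetting keeps P alive
          return theta, P

      rng = np.random.default_rng(1)
      phis = rng.standard_normal((500, 2))
      ys = phis @ np.array([1.0, -0.5]) + 0.05 * rng.standard_normal(500)
      theta, _ = rls_forgetting(phis, ys)           # approx. [1.0, -0.5]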

  4. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization, and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing the repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
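
    PDCO itself is not reproduced here; the sketch below solves the same kind of problem, min 0.5·||Ax - b||² + λ·sum(x) subject to x ≥ 0 (for non-negative x the L1-norm is simply the sum), with a plain projected-gradient iteration on a toy discretized Laplace kernel. All names and parameters are assumptions.

      import numpy as np

      def nonneg_l1_solve(A, b, lam=1e-2, iters=2000):
          """Projected gradient for min 0.5||Ax-b||^2 + lam*sum(x), x >= 0."""
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              grad = A.T @ (A @ x - b) + lam       # smooth gradient + lam*1
              x = np.maximum(x - grad / L, 0.0)    # project onto x >= 0
          return x

      # Discretize s(t) = sum_j x_j exp(-t/T2_j) on a log-spaced T2 grid
      t = np.linspace(1e-3, 1.0, 200)
      T2 = np.logspace(-3, 0, 100)
      A = np.exp(-t[:, None] / T2[None, :])
      x_true = np.zeros(100); x_true[[30, 70]] = 1.0
      b = A @ x_true + 0.01 * np.random.randn(t.size)
      x_hat = nonneg_l1_solve(A, b, lam=0.05)      # sparse T2 spectrum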

  5. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization, and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing the repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.

  6. Joint image reconstruction method with correlative multi-channel prior for x-ray spectral computed tomography

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.

    2018-06-01

    Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and, frequently, angular undersampling. New regularized iterative reconstruction methods have the potential to produce higher quality images, and since energy channels are mutually correlated it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way, while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques, including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.

  7. Efficient multidimensional regularization for Volterra series estimation

    NASA Astrophysics Data System (ADS)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods developed for impulse response estimates of linear time invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with that of white-box (physical) models.
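
    A first-order (FIR) instance of the kernel-regularized estimate that the paper extends to Volterra series might look like the sketch below, using a TC-type prior kernel; the hyperparameters are fixed by hand rather than tuned, and all names are assumptions.

      import numpy as np

      def tc_kernel(n, c=1.0, a=0.9):
          """TC prior: P[i, j] = c * a^max(i, j) encodes smooth, decaying taps."""
          idx = np.arange(n)
          return c * a ** np.maximum.outer(idx, idx)

      def regularized_fir(u, y, n=50, sigma2=0.1):
          """theta = (Phi'Phi + sigma2 * P^-1)^-1 Phi'y."""
          N = y.size
          Phi = np.array([[u[t - i] if t - i >= 0 else 0.0 for i in range(n)]
                          for t in range(N)])     # Toeplitz regressor matrix
          P = tc_kernel(n)
          return np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P),
                                 Phi.T @ y)

      rng = np.random.default_rng(0)
      u = rng.standard_normal(400)
      g_true = 0.8 ** np.arange(50)               # decaying impulse response
      y = np.convolve(u, g_true)[:400] + 0.05 * rng.standard_normal(400)
      g_hat = regularized_fir(u, y)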

  8. Semi-supervised manifold learning with affinity regularization for Alzheimer's disease identification using positron emission tomography imaging.

    PubMed

    Lu, Shen; Xia, Yong; Cai, Tom Weidong; Feng, David Dagan

    2015-01-01

    Dementia, and Alzheimer's disease (AD) in particular, is a global problem and a major threat to the aging population. An image-based computer-aided dementia diagnosis method is needed to provide doctors with help during medical image examination. Many machine learning based dementia classification methods using medical imaging have been proposed, and most of them achieve accurate results. However, most of these methods make use of supervised learning, which requires a fully labeled image dataset that usually is not available in a real clinical environment. Using a large amount of unlabeled images can improve the dementia classification performance. In this study we propose a new semi-supervised dementia classification method based on random manifold learning with affinity regularization. Three groups of spatial features are extracted from positron emission tomography (PET) images to construct an unsupervised random forest, which is then used to regularize the manifold learning objective function. The proposed method, the state-of-the-art Laplacian support vector machine (LapSVM), and a supervised SVM are applied to classify AD and normal controls (NC). The experimental results show that learning with unlabeled images indeed improves the classification performance, and our method outperforms LapSVM on the same dataset.

  9. Numerical Differentiation of Noisy, Nonsmooth Data

    DOE PAGES

    Chartrand, Rick

    2011-01-01

    We consider the problem of differentiating a function specified by noisy data. Regularizing the differentiation process avoids the noise amplification of finite-difference methods. We use total-variation regularization, which allows for discontinuous solutions. The resulting simple algorithm accurately differentiates noisy functions, including those which have a discontinuous derivative.
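
    A minimal sketch of the idea, using a smoothed TV penalty and a generic quasi-Newton solver rather than the author's exact algorithm: the unknown is the derivative u, and the data are matched through its antiderivative.

    ```python
    # TV-regularized differentiation sketch: minimize
    # 0.5*||Au - f||^2 + alpha * sum sqrt((Du)^2 + eps),
    # where A integrates u (cumulative sum) and D is the forward difference.
    import numpy as np
    from scipy.optimize import minimize

    n = 200
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    f = np.abs(x - 0.5) + 0.01 * np.random.randn(n)   # noisy data with a kink

    alpha, eps = 1e-3, 1e-8

    def obj(u):
        r = f[0] + np.cumsum(u) * dx - f              # A u - f
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)
        grad = dx * np.cumsum(r[::-1])[::-1]          # A^T r
        grad[:-1] -= alpha * w                        # TV-term gradient
        grad[1:] += alpha * w
        return 0.5 * r @ r + alpha * np.sqrt(du ** 2 + eps).sum(), grad

    res = minimize(obj, np.gradient(f, dx), jac=True, method="L-BFGS-B")
    u_hat = res.x                                     # approx sign(x - 0.5)
    print("left/right means:", u_hat[: n // 2].mean(), u_hat[n // 2:].mean())
    ```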

  10. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated, as it has been demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem, as small noise in the input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable, based on the observation that both positive and negative potentials exist on the epicardial surface. The inverse problem of ECG is then formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtains more accurate results than previous L2- and L1-norm regularization methods, especially when the noise is large.
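
    The variable-splitting trick can be sketched compactly: writing u = p - m with p, m >= 0 makes the L1 norm linear, and the resulting bound-constrained quadratic program can be attacked by projected gradient steps. The matrix and data below are random stand-ins, not an ECG transfer model.

    ```python
    # Hedged sketch: L1-regularized least squares via the split u = p - m,
    # p, m >= 0, so ||u||_1 = 1'p + 1'm, solved by projected gradient.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(80, 120))
    u_true = np.zeros(120)
    u_true[rng.choice(120, 10, replace=False)] = rng.normal(size=10)
    b = A @ u_true + 0.01 * rng.normal(size=80)

    lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2
    p, m = np.zeros(120), np.zeros(120)
    for _ in range(500):
        g = A.T @ (A @ (p - m) - b)
        p = np.maximum(p - step * (g + lam), 0.0)   # projected gradient steps
        m = np.maximum(m - step * (-g + lam), 0.0)
    u_hat = p - m
    print("entries with correct support:",
          np.sum((np.abs(u_hat) > 1e-3) == (u_true != 0)), "of 120")
    ```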

  11. Evaluating large-scale propensity score performance through real-world and synthetic data experiments.

    PubMed

    Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A

    2018-06-22

    Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
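
    A minimal, hedged sketch of the L1-regularized propensity score step with scikit-learn (synthetic covariates; the cohort construction, hdPS comparator, and negative-control evaluation of the paper are not reproduced):

    ```python
    # L1-penalized logistic regression as a propensity score model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X = rng.normal(size=(5000, 200))                  # high-dimensional covariates
    beta = np.zeros(200); beta[:5] = 1.0              # few true confounders
    treat = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

    # C is the inverse regularization strength; smaller C -> sparser model.
    ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    ps_model.fit(X, treat)
    ps = ps_model.predict_proba(X)[:, 1]              # estimated propensity scores
    print("nonzero coefficients:", np.sum(ps_model.coef_ != 0))
    ```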

  12. SU-G-BRA-11: Tumor Tracking in An Iterative Volume of Interest Based 4D CBCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T; Ahmad, M

    2016-06-15

    Purpose: 4D CBCT can allow evaluation of tumor motion immediately prior to radiation therapy, but suffers from heavy artifacts that limit its ability to track tumors. Various iterative and compressed sensing reconstructions have been proposed to reduce these artifacts, but they are costly time-wise, and regularization can degrade the image quality of the bony anatomy used for alignment. We have previously proposed an iterative volume of interest (I4D VOI) method which minimizes reconstruction time and maintains image quality of bony anatomy by focusing a 4D reconstruction within a VOI. The purpose of this study is to test the tumor tracking accuracy of this method compared to existing methods. Methods: Long-scan (8–10 min) CBCT data with corresponding RPM data were collected for 12 lung cancer patients. The full data set was sorted into 8 phases and reconstructed using FDK cone beam reconstruction to serve as a gold standard. The data were reduced in a way that maintains a normal breathing pattern and used to reconstruct 4D images using FDK, low- and high-regularization TV minimization (λ=2, 10), and the proposed I4D VOI method with PTVs used for the VOI. Tumor trajectories were found using rigid registration within the VOI for each reconstruction and compared to the gold standard. Results: The root mean square error (RMSE) values were 2.70 mm for FDK, 2.50 mm for low-regularization TV, 1.48 mm for high-regularization TV, and 2.34 mm for I4D VOI. Streak artifacts in I4D VOI were reduced compared to FDK, and images were less blurred than TV-reconstructed images. Conclusion: I4D VOI performed at least as well as existing methods in tumor tracking, with the exception of high-regularization TV minimization. These results, along with the reconstruction time and outside-VOI image quality advantages, suggest I4D VOI to be an improvement over existing methods. Funding support provided by CPRIT grant RP110562-P2-01.

  13. Automatic Constraint Detection for 2D Layout Regularization.

    PubMed

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
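
    The regularization step, as distinct from the paper's automatic constraint detection, can be illustrated with a toy quadratic program: given detected "same coordinate" constraints, snapping reduces to a small linear system. The coordinates and weight below are made up for illustration.

    ```python
    # Soft alignment as a quadratic program: minimize
    # ||x - x0||^2 + w * sum over detected pairs (x_i - x_j)^2.
    # The optimality condition is (I + w L) x = x0, with L the graph
    # Laplacian of the constraint pairs.
    import numpy as np

    x0 = np.array([10.2, 9.8, 10.1, 25.3, 24.9])   # detected left edges
    pairs = [(0, 1), (1, 2), (3, 4)]               # detected "same x" pairs
    w = 100.0                                      # constraint weight

    n = len(x0)
    L = np.zeros((n, n))
    for i, j in pairs:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    x = np.linalg.solve(np.eye(n) + w * L, x0)
    print(np.round(x, 2))   # aligned groups collapse toward common coordinates
    ```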

  14. A Unified Approach for Solving Nonlinear Regular Perturbation Problems

    ERIC Educational Resources Information Center

    Khuri, S. A.

    2008-01-01

    This article describes a simple alternative unified method of solving nonlinear regular perturbation problems. The procedure is based upon the manipulation of Taylor's approximation for the expansion of the nonlinear term in the perturbed equation. An essential feature of this technique is the relative simplicity used and the associated unified…
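
    As a hedged illustration of the Taylor-expansion manipulation (not the article's own example), consider the algebraic equation x = 1 + εx²:

    ```latex
    % Illustrative regular-perturbation expansion. Posit
    % x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \dots and Taylor-expand the
    % nonlinear term about x_0:
    \begin{align*}
    x_0 + \epsilon x_1 + \epsilon^2 x_2
      &= 1 + \epsilon\,(x_0 + \epsilon x_1 + \dots)^2 \\
      &= 1 + \epsilon x_0^2 + 2\epsilon^2 x_0 x_1 + O(\epsilon^3).
    \end{align*}
    % Matching powers of \epsilon gives x_0 = 1, x_1 = x_0^2 = 1,
    % x_2 = 2 x_0 x_1 = 2, so x \approx 1 + \epsilon + 2\epsilon^2.
    ```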

  15. School Disrepair and Substance Use among Regular and Alternative High School Students

    ERIC Educational Resources Information Center

    Grana, Rachel A.; Black, David; Sun, Ping; Rohrbach, Louise A.; Gunning, Melissa; Sussman, Steven

    2010-01-01

    Background: The physical environment influences adolescent health behavior and personal development. This article examines the relationship between level of school disrepair and substance use among students attending regular high school (RHS) and alternative high school (AHS). Methods: Data were collected from students (N = 7058) participating in…

  16. 27 CFR 31.171 - Method of filing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... disposition required by §§ 31.155 and 31.156 in accordance with the wholesaler's regular accounting and... disposition must be filed not later than one business day following the date the transaction occurred. (c... be filed in accordance with the wholesaler's regular accounting and recordkeeping practices. (26 U.S...

  17. Applications of exact traveling wave solutions of Modified Liouville and the Symmetric Regularized Long Wave equations via two new techniques

    NASA Astrophysics Data System (ADS)

    Lu, Dianchen; Seadawy, Aly R.; Ali, Asghar

    2018-06-01

    In this current work, we employ novel methods to find the exact travelling wave solutions of the Modified Liouville equation and the Symmetric Regularized Long Wave equation, namely the extended simple equation and exp(-Ψ(ξ))-expansion methods. By assigning different values to the parameters, different types of solitary wave solutions are derived from the exact traveling wave solutions, which shows the efficiency and precision of our methods. Some solutions are represented graphically. The obtained results have several applications in physical science.

  18. Construction of normal-regular decisions of Bessel typed special system

    NASA Astrophysics Data System (ADS)

    Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.

    2017-09-01

    A special system of second-order partial differential equations is studied, whose solution is expressed via the degenerate hypergeometric function and reduces to Bessel functions of two variables. To construct a solution of this system near regular and irregular singularities, we use the Frobenius-Latysheva method, applying the concepts of rank and antirank. We prove the basic theorem establishing the existence of four linearly independent solutions of the studied Bessel-type system. To prove the existence of normal-regular solutions, we establish necessary conditions for the existence of such solutions. The existence and convergence of a normally regular solution are shown using the notions of rank and antirank.

  19. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  20. Control of the transition between regular and mach reflection of shock waves

    NASA Astrophysics Data System (ADS)

    Alekseev, A. K.

    2012-06-01

    A control problem was considered that makes it possible to switch the flow between stationary Mach and regular reflection of shock waves within the dual solution domain. The sensitivity of the flow was computed by solving adjoint equations. A control disturbance was sought by applying gradient optimization methods. According to the computational results, the transition from regular to Mach reflection can be executed by raising the temperature. The transition from Mach to regular reflection can be achieved by lowering the temperature at moderate Mach numbers and is impossible at large numbers. The reliability of the numerical results was confirmed by verifying them with the help of a posteriori analysis.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senesi, Andrew; Lee, Byeongdu

    Herein, a general method to calculate the scattering functions of polyhedra, including both regular and semi-regular polyhedra, is presented. These calculations may be achieved by breaking a polyhedron into sets of congruent pieces, thereby reducing computation time by taking advantage of Fourier transforms and inversion symmetry. Each piece belonging to a set or subunit can be generated by either rotation or translation. Further, general strategies to compute truncated, concave and stellated polyhedra are provided. Using this method, the asymptotic behaviors of the polyhedral scattering functions are compared with that of a sphere. It is shown that, for a regular polyhedron, the form factor oscillation at high q is correlated with the face-to-face distance. In addition, polydispersity affects the Porod constant. The ideas presented herein will be important for the characterization of nanomaterials using small-angle scattering.
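
    A hedged numerical illustration of the sphere-versus-polyhedron comparison (Monte Carlo orientation averaging of the standard cube form factor, not the paper's congruent-subunit algorithm):

    ```python
    # Orientation-averaged form factor of a cube vs. the analytic sphere
    # form factor P(q) = [3 (sin qR - qR cos qR) / (qR)^3]^2.
    import numpy as np

    rng = np.random.default_rng(4)
    q = np.linspace(0.1, 30, 300)          # scattering vector magnitudes
    a = 1.0                                # cube edge; sphere of equal volume
    R = a * (3 / (4 * np.pi)) ** (1 / 3)

    x = q * R
    P_sphere = (3 * (np.sin(x) - x * np.cos(x)) / x ** 3) ** 2

    # Cube amplitude for one orientation is a product of sinc functions;
    # average |F|^2 over random orientations (uniform on the sphere).
    dirs = rng.normal(size=(2000, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    P_cube = np.empty_like(q)
    for k, qk in enumerate(q):
        qv = qk * dirs                                      # (2000, 3) vectors
        F = np.prod(np.sinc(qv * a / (2 * np.pi)), axis=1)  # sinc(t)=sin(pi t)/(pi t)
        P_cube[k] = np.mean(F ** 2)
    print("high-q ratio cube/sphere:", P_cube[-1] / P_sphere[-1])
    ```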

  2. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
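
    The damped-LSQR core of such a deconvolution is compact to sketch; the 1D Gaussian PSF and sparse banded convolution matrix below are simplified stand-ins for the paper's modified 2D PSF model.

    ```python
    # Regularized (damped) LSQR deconvolution with a sparse convolution matrix.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import lsqr

    n = 500
    psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
    psf /= psf.sum()

    # Sparse banded convolution matrix C (each PSF tap fills one diagonal).
    C = diags(psf, offsets=np.arange(-5, 6), shape=(n, n)).tocsr()

    x_true = np.zeros(n); x_true[::25] = 1.0            # spectrum-like spikes
    b = C @ x_true + 0.005 * np.random.randn(n)

    # damp is the Tikhonov parameter: minimizes ||Cx - b||^2 + damp^2 ||x||^2.
    x_hat = lsqr(C, b, damp=0.05)[0]
    print("reconstruction error:", np.linalg.norm(x_hat - x_true))
    ```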

  3. Considering the Spatial Layout Information of Bag of Features (BoF) Framework for Image Classification.

    PubMed

    Mu, Guangyu; Liu, Ying; Wang, Limin

    2015-01-01

    Spatial pooling methods such as spatial pyramid matching (SPM) are crucial in the bag of features model used in image classification. SPM partitions the image into a set of regular grids and assumes that the spatial layout of all visual words obeys a uniform distribution over these regular grids. In practice, however, we consider that different visual words should obey different spatial layout distributions. To improve SPM, we develop a novel spatial pooling method, namely spatial distribution pooling (SDP). The proposed SDP method uses an extended Gaussian mixture model to estimate the spatial layout distributions of the visual vocabulary. For each visual word type, SDP can generate a set of flexible grids rather than the regular grids of traditional SPM. Furthermore, we can compute the grid weights for visual word tokens according to their spatial coordinates. The experimental results demonstrate that SDP outperforms the traditional spatial pooling methods and is competitive with the state-of-the-art classification accuracy on several challenging image datasets.

  4. Nonrigid iterative closest points for registration of 3D biomedical surfaces

    NASA Astrophysics Data System (ADS)

    Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee

    2018-01-01

    Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called Nonrigid Iterative Closest Points (NICP), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation, and it does not require parametrization of the input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike standard ICP, the distance term is determined from multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of the multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performance when no global pose differences or significant amounts of bending exist in the models, for example for families of similar shapes such as human femur and vertebrae models.

  5. Frequency guided methods for demodulation of a single fringe pattern.

    PubMed

    Wang, Haixia; Kemao, Qian

    2009-08-17

    Phase demodulation from a single fringe pattern is a challenging task, but one of considerable interest. A frequency-guided regularized phase tracker and a frequency-guided sequential demodulation method with Levenberg-Marquardt optimization are proposed to demodulate a single fringe pattern. A demodulation path guided by the local frequency, from the highest to the lowest, is applied in both methods. Since critical points have low local frequency values, they are processed last, so that the spurious sign problem caused by these points is avoided. These two methods can be considered as alternatives to the effective fringe follower regularized phase tracker. Demodulation results from one computer-simulated and two experimental fringe patterns using the proposed methods are demonstrated. (c) 2009 Optical Society of America

  6. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
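
    A hedged sketch of a gradient-type iteration of this flavor: with the penalty lam*||x||_1 plus a non-negativity constraint, the proximal step is one-sided soft-thresholding, max(x - t*lam, 0). Random operators stand in for the phase-retrieval setting.

    ```python
    # Proximal gradient descent for nonnegative, sparse least squares.
    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.normal(size=(100, 300))
    x_true = np.zeros(300)
    x_true[rng.choice(300, 8, replace=False)] = 1.0
    b = A @ x_true

    lam = 0.1
    t = 1.0 / np.linalg.norm(A, 2) ** 2              # step size 1/L
    x = np.zeros(300)
    for _ in range(1000):
        # gradient step on 0.5||Ax-b||^2, then one-sided soft-threshold
        x = np.maximum(x - t * (A.T @ (A @ x - b)) - t * lam, 0.0)
    print("nonzeros found:", np.sum(x > 1e-3), "of 8")
    ```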

  7. Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

    NASA Astrophysics Data System (ADS)

    Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen

    2017-04-01

    Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
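
    The ADMM splitting itself is generic and easy to sketch for an l1-regularized least-squares core (a hedged stand-in for the STAP filter design; the clutter model and steering constraints are omitted):

    ```python
    # Generic ADMM for l1-regularized least squares: the splitting variable z
    # carries the l1 term, and the x-update reuses one cached linear solve.
    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.normal(size=(60, 150))
    w_true = np.zeros(150)
    w_true[rng.choice(150, 6, replace=False)] = 1.0
    b = A @ w_true + 0.01 * rng.normal(size=60)

    lam, rho = 0.1, 1.0
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    Q = np.linalg.inv(AtA + rho * np.eye(n))      # factor once, reuse each sweep

    x = z = u = np.zeros(n)
    for _ in range(200):
        x = Q @ (Atb + rho * (z - u))                                  # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
        u = u + x - z                                                  # dual update
    print("support:", np.nonzero(np.abs(z) > 1e-3)[0],
          "true:", np.nonzero(w_true)[0])
    ```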

  8. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    Purpose: This work is to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) methods into iterative reconstruction (IR) methods for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by a certain AR to a residual image, which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT, with AR being FDK and total-variation sparsity regularization, and improves image quality relative to both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The author was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
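
    The two-step PFBS scheme can be sketched in a plug-and-play toy form: a gradient step on a (here unfiltered) data fidelity followed by a denoising step standing in for the sparsity proximal operator. The masking operator and TV denoiser below are illustrative assumptions, not the paper's CT operators or FDK filtering.

    ```python
    # Toy forward-backward splitting: gradient step on 0.5||Mx - d||^2 for a
    # binary sampling mask M, then a TV denoiser as the regularization step.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(7)
    img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
    mask = rng.random(img.shape) < 0.4            # keep 40% of pixels
    data = mask * img + 0.05 * mask * rng.normal(size=img.shape)

    x = np.zeros_like(img)
    for _ in range(50):
        x = x - mask * (mask * x - data)          # forward (gradient) step
        x = denoise_tv_chambolle(x, weight=0.05)  # backward (denoising) step
    print("reconstruction MSE:", np.mean((x - img) ** 2))
    ```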

  9. Comparison of ultrasonic-assisted and regular leaching of germanium from by-product of zinc metallurgy.

    PubMed

    Zhang, Libo; Guo, Wenqian; Peng, Jinhui; Li, Jing; Lin, Guo; Yu, Xia

    2016-07-01

    A major source of germanium recovery, and the source used in this research, is the by-product of the lead and zinc metallurgical process. The primary purpose of the research is to investigate the effects of ultrasonic-assisted and regular methods on the leaching yield of germanium from roasted slag containing germanium. In the study, an HCl-CaCl2 mixed solution is adopted as the reacting system and Ca(ClO)2 is used as the oxidant. Through six single-factor experiments (leaching time, temperature, amount of Ca(ClO)2, acid concentration, concentration of CaCl2 solution, ultrasonic power) and a comparison of the two methods, it is found that the optimum recovery of germanium for the ultrasonic-assisted method is obtained at a temperature of 80 °C for a leaching duration of 40 min. The optimum concentrations of hydrochloric acid, CaCl2 and oxidizing agent are identified to be 3.5 mol/L, 150 g/L and 58.33 g/L, respectively. In addition, 700 W is the best ultrasonic power, and an overly high power is adverse to the leaching process. Under the optimum conditions, the recovery of germanium can reach up to 92.7%. The optimum leaching conditions for the regular leaching method are the same as for the ultrasonic-assisted method, except that the regular method consumes 100 min and its leaching rate of Ge, 88.35%, is lower by about 4.35%. Overall, the experiments show that the leaching time can be reduced by as much as 60% and the leaching rate of Ge can be increased by 3-5% with the application of ultrasound, which is mainly due to its mechanical action. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    Fourier samples are collected in a variety of applications, including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have a sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l1 regularization term. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well-known drawback of using TV as an l1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher-order sparsifying transform, coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.

  11. The use of salinity contrast for density difference compensation to improve the thermal recovery efficiency in high-temperature aquifer thermal energy storage systems

    NASA Astrophysics Data System (ADS)

    van Lopik, Jan H.; Hartog, Niels; Zaadnoordijk, Willem Jan

    2016-08-01

    The efficiency of heat recovery in high-temperature (>60 °C) aquifer thermal energy storage (HT-ATES) systems is limited due to the buoyancy of the injected hot water. This study investigates the potential to improve the efficiency through compensation of the density difference by increased salinity of the injected hot water for a single injection-recovery well scheme. The proposed method was tested through numerical modeling with SEAWATv4, considering seasonal HT-ATES with four consecutive injection-storage-recovery cycles. Recovery efficiencies for the consecutive cycles were investigated for six cases with three simulated scenarios: (a) regular HT-ATES, (b) HT-ATES with density difference compensation using saline water, and (c) theoretical regular HT-ATES without free thermal convection. For the reference case, in which 80 °C water was injected into a high-permeability aquifer, regular HT-ATES had an efficiency of 0.40 after four consecutive recovery cycles. The density difference compensation method resulted in an efficiency of 0.69, approximating the theoretical case (0.76). Sensitivity analysis showed that the net efficiency increase by using the density difference compensation method instead of regular HT-ATES is greater for higher aquifer hydraulic conductivity, larger temperature difference between injection water and ambient groundwater, smaller injection volume, and larger aquifer thickness. This means that density difference compensation allows the application of HT-ATES in thicker, more permeable aquifers and with larger temperatures than would be considered for regular HT-ATES systems.

  12. Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images

    NASA Astrophysics Data System (ADS)

    Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.

    2017-05-01

    Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are laden with topological defects, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining the orientation and position of the points using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process which encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.

  13. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

    Stabilized gradient-based methods have been proven to be efficient for inverse problems. Based on these methods, setting the gradient close to zero can effectively minimize the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of poor depth resolution in gradient-based gravity inversion methods, we find that imposing a depth weighting functional in the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown as Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient can improve the depth resolution more efficiently, is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. Besides, the fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown as Figure 1(b)). Acknowledgement: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.

  14. Approaching the axiomatic enrichment of the Gene Ontology from a lexical perspective.

    PubMed

    Quesada-Martínez, Manuel; Mikroyannidi, Eleni; Fernández-Breis, Jesualdo Tomás; Stevens, Robert

    2015-09-01

    The main goal of this work is to measure how lexical regularities in biomedical ontology labels can be used for the automatic creation of formal relationships between classes, and to evaluate the results of applying our approach to the Gene Ontology (GO). In recent years, we have developed a method for the lexical analysis of regularities in biomedical ontology labels, and we showed that the labels can present a high degree of regularity. In this work, we extend our method with a cross-products extension (CPE) metric, which estimates the potential interest of a specific regularity for axiomatic enrichment in the lexical analysis, using information on exact matches in external ontologies. The GO consortium recently enriched the GO by using so-called cross-product extensions. Cross-products are generated by establishing axioms that relate a given GO class with classes from the GO or other biomedical ontologies. We apply our method to the GO and study how its lexical analysis can identify and reconstruct the cross-products that are defined by the GO consortium. The labels of the GO classes are highly regular in lexical terms, and exact matches with labels of external ontologies affect 80% of the GO classes. The CPE metric reveals that 31.48% of the classes that exhibit regularities have fragments that are classes in the two external ontologies selected for our experiment, namely the Cell Ontology and the Chemical Entities of Biological Interest ontology, and 18.90% of them are fully decomposable into smaller parts. Our results show that the CPE metric permits our method to detect GO cross-product extensions with a mean recall of 62% and a mean precision of 28%. The study is completed with an analysis of false positives to explain this precision value. We think that our results support the claim that our lexical approach can contribute to the axiomatic enrichment of biomedical ontologies and that it can provide new insights into the engineering of biomedical ontologies. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Simultaneous regularization method for the determination of radius distributions from experimental multiangle correlation functions

    NASA Astrophysics Data System (ADS)

    Buttgereit, R.; Roths, T.; Honerkamp, J.; Aberle, L. B.

    2001-10-01

    Dynamic light scattering experiments have become a powerful tool for investigating the dynamical properties of complex fluids. In many applications in both soft matter research and industry, so-called "real world" systems are a subject of great interest. Here, the dilution of the investigated system often cannot be changed without introducing measurement artifacts, so one often has to deal with highly concentrated and turbid media. The investigation of such systems requires techniques that suppress the influence of multiple scattering, e.g., cross correlation techniques. However, measurements on turbid as well as highly diluted media lead to data with a low signal-to-noise ratio, which complicates data analysis and leads to unreliable results. In this article, a multiangle regularization method is discussed which copes with the difficulties arising from such samples and enormously enhances the quality of the estimated solution. In order to demonstrate the efficiency of this multiangle regularization method, we applied it to cross correlation functions measured on highly turbid samples.
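
    A hedged sketch of the simultaneous (multiangle) inversion idea: kernels for several angles are stacked so that one decay-rate distribution explains all correlation functions, with smoothness imposed by augmenting the system, here solved by non-negative least squares. All parameters are illustrative.

    ```python
    # min ||Kx - g||^2 + lam ||Lx||^2 with x >= 0 is equivalent to NNLS on
    # the augmented system [K; sqrt(lam) L] x = [g; 0].
    import numpy as np
    from scipy.optimize import nnls

    tau = np.logspace(-6, -1, 60)                  # lag times (s)
    Gamma = np.logspace(1, 5, 40)                  # decay rates (1/s)
    q2 = np.array([0.6, 1.0, 1.6])                 # relative q^2 factors per angle

    x_true = np.exp(-0.5 * ((np.log(Gamma) - np.log(1e3)) / 0.4) ** 2)
    K = np.vstack([np.exp(-np.outer(tau, qk * Gamma)) for qk in q2])
    g = K @ x_true + 0.002 * np.random.randn(K.shape[0])

    lam = 1.0
    L = np.diff(np.eye(len(Gamma)), 2, axis=0)     # second-difference smoother
    K_aug = np.vstack([K, np.sqrt(lam) * L])
    g_aug = np.concatenate([g, np.zeros(L.shape[0])])
    x_hat, _ = nnls(K_aug, g_aug)
    print("peak near Gamma =", Gamma[np.argmax(x_hat)])
    ```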

  16. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
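
    A minimal sketch of the underlying regularized inversion with a Laplacian smoothness matrix (the classic LSC; the paper's adaptive weighting and Helmert variance-component selection of the regularization parameter are not reproduced, and alpha below is fixed by hand):

    ```python
    # Smoothness-constrained Tikhonov inversion:
    # x = argmin ||Ax - b||^2 + alpha^2 ||R x||^2, with R a 1D Laplacian.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 80
    A = rng.normal(size=(40, n))                    # underdetermined "Green's" matrix
    x_true = np.sin(np.linspace(0, np.pi, n)) ** 2  # smooth slip-like profile
    b = A @ x_true + 0.05 * rng.normal(size=40)

    R = np.diff(np.eye(n), 2, axis=0)               # second-difference (Laplacian)

    alpha = 1.0
    x_hat = np.linalg.solve(A.T @ A + alpha ** 2 * R.T @ R, A.T @ b)
    print("relative error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```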

  17. Novel insights into the mechanism of the ortho/para spin conversion of hydrogen pairs: implications for catalysis and interstellar water.

    PubMed

    Limbach, Hans-Heinrich; Buntkowsky, Gerd; Matthes, Jochen; Gründemann, Stefan; Pery, Tal; Walaszek, Bernadeta; Chaudret, Bruno

    2006-03-13

    The phenomenon of exchange coupling is taken into account in the description of the magnetic nuclear spin conversion between bound ortho- and para-dihydrogen. This conversion occurs without bond breaking, in contrast to the chemical spin conversion. It is shown that the exchange coupling needs to be reduced so that the corresponding exchange barrier can increase and the given magnetic interaction can effectively induce a spin conversion. The implications for related molecules such as water are discussed. For ice, a dipolar magnetic conversion and for liquid water a chemical conversion are predicted to occur within the millisecond timescale. It follows that a separation of water into its spin isomers, as proposed by Tikhonov and Volkov (Science 2002, 296, 2363), is not feasible. Nuclear spin temperatures of water vapor in comets, which are smaller than the gas-phase equilibrium temperatures, are proposed to be diagnostic for the temperature of the ice or the dust surface from which the water was released.

  18. The Regular Education Initiative: What Do Three Groups of Education Professionals Think?

    ERIC Educational Resources Information Center

    Davis, Jane C.; Maheady, Larry

    1991-01-01

    A survey of general education teachers, special education teachers, and building principals in Michigan assessed their agreement with the Regular Education Initiative (REI) goals and methods. Analysis of the 605 responses indicated general agreement with REI goals and procedures. Most educators believed that pragmatic factors posed the greatest…

  19. Programming-Languages as a Conceptual Framework for Teaching Mathematics

    ERIC Educational Resources Information Center

    Feurzeig, Wallace; Papert, Seymour A.

    2011-01-01

    Formal mathematical methods remain, for most high school students, mysterious, artificial and not a part of their regular intuitive thinking. The authors develop some themes that could lead to a radically new approach. According to this thesis, the teaching of programming languages as a regular part of academic progress can contribute effectively…

  20. GRAPHEME-PHONEME REGULARITY AND ITS EFFECTS ON EARLY READING--A PILOT STUDY.

    ERIC Educational Resources Information Center

    FRANKENSTEIN, ROSELYN; KJELDERGAARD, PAUL M.

    A PILOT EXPERIMENT CONDUCTED TO TEST THE EFFECT OF A SPECIALLY DEVISED PHONIC APPROACH TO EARLY READING IS DESCRIBED. THE PHONIC METHOD USED ACHIEVED SOUND-SYMBOL REGULARITY AND HAD THE FOLLOWING CHARACTERISTICS--(1) CONSONANT GRAPHEMES EACH REPRESENTED ONLY ONE SOUND AND WERE PRINTED USING NEARLY STANDARD ALPHABETIC SYMBOLS. (2) EACH VOWEL…
