Sample records for regular solution approach

  1. Recovering fine details from under-resolved electron tomography data using higher order total variation ℓ1 regularization

    DOE PAGES

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...

    2017-01-03

    Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images – even those for which TV was designed – particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. Finally, we develop results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.
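
    The contrast between TV and HOTV can be sketched in a few lines: both minimize a data misfit plus an ℓ1 penalty on a finite-difference operator, and only the order of that operator changes. The toy reconstruction below (hypothetical random forward operator, smoothed ℓ1 penalty, not the authors' implementation) uses k = 1 for TV and k = 3 for a HOTV-type penalty on a smooth signal, where the piecewise-constant bias of TV shows up as a larger error.

    ```python
    # Minimal 1-D sketch of TV vs. higher-order TV (HOTV) regularization,
    # NOT the authors' implementation: toy data, smoothed |.| penalty.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, m = 100, 40                     # signal length, number of measurements
    t = np.linspace(0, 1, n)
    x_true = np.sin(2 * np.pi * t)     # smooth signal: TV's piecewise-constant prior fits poorly
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # hypothetical underdetermined forward operator
    b = A @ x_true + 0.01 * rng.standard_normal(m)

    def diff_op(x, order):
        """k-th order finite difference: order=1 -> TV, order>=2 -> HOTV."""
        d = x
        for _ in range(order):
            d = np.diff(d)
        return d

    def objective(x, order, lam, eps=1e-6):
        # Smoothed l1 (pseudo-Huber) so a generic gradient-based solver applies.
        d = diff_op(x, order)
        return np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(d ** 2 + eps))

    for order in (1, 3):               # compare TV against 3rd-order HOTV
        res = minimize(objective, np.zeros(n), args=(order, 1e-3),
                       method="L-BFGS-B", options={"maxiter": 2000})
        err = np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true)
        print(f"order {order}: relative error {err:.3f}")
    ```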

  2. Extended Hansen solubility approach: naphthalene in individual solvents.

    PubMed

    Martin, A; Wu, P L; Adjei, A; Beerbower, A; Prausnitz, J M

    1981-11-01

    A multiple regression method using Hansen partial solubility parameters, delta D, delta P, and delta H, was used to reproduce the solubilities of naphthalene in pure polar and nonpolar solvents and to predict its solubility in untested solvents. The method, called the extended Hansen approach, was compared with the extended Hildebrand solubility approach and the universal-functional-group-activity-coefficient (UNIFAC) method. The Hildebrand regular solution theory was also used to calculate naphthalene solubility. Naphthalene, an aromatic molecule having no side chains or functional groups, is "well-behaved"; i.e., its solubility in active solvents known to interact with drug molecules is fairly regular. Because of its simplicity, naphthalene is a suitable solute with which to initiate the difficult study of solubility phenomena. The three methods tested (Hildebrand regular solution theory was introduced only for comparison of solubilities in regular solutions) yielded similar results, reproducing naphthalene solubilities within approximately 30% of literature values. In some cases, however, the error was considerably greater. The UNIFAC calculation is superior in that it requires only the solute's heat of fusion, the melting point, and a knowledge of the chemical structures of solute and solvent. The extended Hansen and extended Hildebrand methods need experimental solubility data on which to carry out regression analysis. The extended Hansen approach was the method of second choice because of its adaptability to solutes and solvents from various classes. Sample calculations are included to illustrate methods of predicting solubilities in untested solvents at various temperatures. The UNIFAC method was successful in this regard.
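
    The regression step of an extended-Hansen-type analysis can be illustrated with ordinary least squares; the solvent table, solubility values, and the exact choice of regression terms below are hypothetical placeholders, not the paper's data or model.

    ```python
    # Minimal sketch of the multiple-regression step in an extended-Hansen-type
    # model; the data and exact regression terms here are hypothetical.
    import numpy as np

    # Hypothetical solvent table: Hansen parameters (delta_D, delta_P, delta_H)
    # and measured log solubility of the solute in each solvent.
    params = np.array([
        [18.0,  0.0,  0.0],
        [15.8,  8.8, 19.4],
        [17.8,  3.1,  5.7],
        [16.8,  5.7,  8.0],
        [19.0,  4.3,  4.1],
        [15.5, 10.4,  7.0],
        [17.4,  2.0,  3.0],
        [16.0,  9.0, 10.0],
    ])
    log_solubility = np.array([-1.20, -2.10, -1.05, -1.30, -0.95, -1.60, -1.10, -1.75])

    # Design matrix: intercept + linear + squared terms in each partial parameter.
    dD, dP, dH = params.T
    X = np.column_stack([np.ones(len(params)), dD, dP, dH, dD**2, dP**2, dH**2])

    coef, *_ = np.linalg.lstsq(X, log_solubility, rcond=None)

    # Predict solubility in an untested solvent from its Hansen parameters.
    new = np.array([17.0, 6.0, 6.5])
    x_new = np.concatenate([[1.0], new, new**2])
    print("predicted log solubility:", x_new @ coef)
    ```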

  3. Recent advancements in GRACE mascon regularization and uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Loomis, B. D.; Luthcke, S. B.

    2017-12-01

    The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist between the different centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.

  4. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
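
    A minimal sketch of the parameter choice described above, assuming a toy ill-conditioned Jacobian: scan the singular values in decreasing order, take the first one that approaches zero as the regularization parameter, and build the resolution and unit covariance matrices from the resulting filter factors. The zero threshold used here is an assumption of this sketch.

    ```python
    # Sketch of choosing a regularization parameter from a singular-value plot
    # and forming model resolution / unit covariance matrices; toy Jacobian only.
    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.standard_normal((60, 30)) @ np.diag(np.logspace(0, -8, 30))  # ill-conditioned toy Jacobian

    U, s, Vt = np.linalg.svd(G, full_matrices=False)

    # "First singular value that approaches zero": here, the first s_i below a
    # small fraction of s_max (this threshold is an assumption of the sketch).
    idx = np.argmax(s < 1e-4 * s[0])
    mu = s[idx]
    print("chosen regularization parameter:", mu)

    # Damped (Tikhonov-style) filter factors built from the singular values.
    f = s**2 / (s**2 + mu**2)

    # Model resolution R = V F V^T and unit covariance C = V F^2 S^-2 V^T.
    V = Vt.T
    R = V @ np.diag(f) @ Vt
    C = V @ np.diag(f**2 / s**2) @ Vt
    print("trace of resolution matrix (effective number of resolved parameters):", np.trace(R))
    print("error bars (sqrt of diagonal of unit covariance), first five:", np.sqrt(np.diag(C))[:5])
    ```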

  5. The Role of the Pressure in the Partial Regularity Theory for Weak Solutions of the Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Chamorro, Diego; Lemarié-Rieusset, Pierre-Gilles; Mayoufi, Kawther

    2018-04-01

    We study the role of the pressure in the partial regularity theory for weak solutions of the Navier-Stokes equations. By introducing the notion of dissipative solutions, due to Duchon and Robert (Nonlinearity 13:249-255, 2000), we provide a generalization of the Caffarelli, Kohn and Nirenberg theory. Our approach sheds new light on the role of the pressure in this theory in connection to Serrin's local regularity criterion.

  6. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model order and model complexity reduction have been investigated. Singular perturbation methods were first attempted and found to be unsatisfactory. The second approach, based on regular perturbation analysis, was subsequently investigated. It also fails because the aerodynamic effects (ignored in the zero order solution) are too large to be treated as perturbations. Therefore, the study demonstrates that perturbation methods alone (both regular and singular perturbations) are inadequate for use in developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation and the analytical method of regular perturbations. The concept of choosing intelligent interpolating functions is also introduced. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing the approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.

  7. Deforming regular black holes

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2017-06-01

    In this work, we have deformed regular black holes which possess a general mass term described by a function which generalizes the Bardeen and Hayward mass functions. By using linear constraints in the energy-momentum tensor to generate metrics, the solutions presented in this work are either regular or singular. That is, within this approach, it is possible to generate regular or singular black holes from regular or singular black holes. Moreover, contrary to the Bardeen and Hayward regular solutions, the deformed regular black holes may violate the weak energy condition despite the presence of the spherical symmetry. Some comments on accretion of deformed black holes in cosmological scenarios are made.

  8. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, which can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
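
    One common way to realize a multiplicative regularization, shown below as a hedged stand-in for the paper's functional, is to minimize the product J(x) = ||Ax - b||^2 * ||x||^2; at a stationary point this is equivalent to a Tikhonov step whose weight equals the current misfit-to-regularizer ratio, so the amount of regularization adjusts itself at every iteration and no parameter is fixed beforehand.

    ```python
    # Minimal sketch of a multiplicative regularization: minimize
    # J(x) = ||Ax - b||^2 * ||x||^2 by lagging the ratio lambda_k = F_k / R_k.
    # A generic construction, not the paper's exact functional.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    A = rng.standard_normal((n, n)) @ np.diag(np.logspace(0, -6, n))  # ill-conditioned toy operator
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    b = A @ x_true + 1e-4 * rng.standard_normal(n)

    x = np.linalg.lstsq(A, b, rcond=None)[0]          # unregularized start
    for k in range(30):
        F = np.sum((A @ x - b) ** 2)                  # data misfit
        R = np.sum(x ** 2) + 1e-12                    # regularizer
        lam = F / R                                   # weight adjusts itself each pass
        x_new = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        if np.linalg.norm(x_new - x) < 1e-10 * np.linalg.norm(x_new):
            break
        x = x_new

    print("final effective lambda:", lam)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```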

  9. Terminal attractors for addressable memory in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1988-01-01

    A new type of attractors - terminal attractors - for an addressable memory in neural networks operating in continuous time is introduced. These attractors represent singular solutions of the dynamical system. They intersect (or envelope) the families of regular solutions while each regular solution approaches the terminal attractor in a finite time period. It is shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the weight matrix.
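
    The finite-time convergence can be made concrete with a one-line example: the scalar dynamics dx/dt = -x^(1/3) violates the Lipschitz condition at x = 0 and reaches the attractor exactly at t* = (3/2) x0^(2/3), whereas dx/dt = -x only decays asymptotically. A minimal numerical check (toy scalar dynamics, not the paper's network equations):

    ```python
    # Quick check that a terminal attractor is reached in finite time:
    # dx/dt = -x**(1/3) violates the Lipschitz condition at x = 0 and hits it
    # at t* = 1.5 * x0**(2/3); dx/dt = -x only decays asymptotically.
    import numpy as np
    from scipy.integrate import solve_ivp

    x0 = 1.0
    t_star = 1.5 * x0 ** (2.0 / 3.0)   # analytic arrival time for the terminal attractor

    def terminal(t, x):
        return [-np.sign(x[0]) * np.abs(x[0]) ** (1.0 / 3.0)]

    def regular(t, x):
        return [-x[0]]

    for name, rhs in [("terminal", terminal), ("regular", regular)]:
        sol = solve_ivp(rhs, (0.0, t_star), [x0], rtol=1e-8, atol=1e-10)
        print(f"{name}: x(t* = {t_star:.3f}) = {sol.y[0, -1]:.2e}")
    # The terminal dynamics is ~0 at t*; the regular dynamics is still O(exp(-t*)).
    ```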

  10. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
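
    A compact way to see the diversification 'pressure' of an L2 regularizer is the ridge-regularized minimum-variance portfolio with budget constraint 1'w = 1, which has a closed form via the KKT conditions. This is a simplified stand-in for the paper's expected-shortfall formulation, run on simulated returns:

    ```python
    # Sketch: L2-regularized minimum-variance portfolio with budget constraint
    # 1'w = 1 (closed form via KKT). Simplified stand-in for the paper's
    # expected-shortfall formulation; data are simulated.
    import numpy as np

    rng = np.random.default_rng(3)
    n_assets, n_obs = 50, 60              # few observations per asset: estimation error dominates
    returns = rng.standard_normal((n_obs, n_assets)) * 0.02
    sigma = np.cov(returns, rowvar=False) # noisy sample covariance

    def min_var_weights(cov, lam):
        """argmin w' cov w + lam ||w||^2  s.t.  sum(w) = 1."""
        ones = np.ones(cov.shape[0])
        m_inv_1 = np.linalg.solve(cov + lam * np.eye(cov.shape[0]), ones)
        return m_inv_1 / (ones @ m_inv_1)

    for lam in (0.0, 1e-3, 1e-1):
        w = min_var_weights(sigma, lam)
        print(f"lam={lam:g}: max |weight| = {np.abs(w).max():.3f}, "
              f"effective diversification 1/sum(w^2) = {1.0 / np.sum(w**2):.1f}")
    # Larger lam shrinks extreme positions: the diversification 'pressure'.
    ```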

  11. Primordial cosmology in mimetic Born-Infeld gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouhmadi-Lopez, Mariam; Chen, Che-Yu; Chen, Pisin

    Here, the Eddington-inspired-Born-Infeld (EiBI) model is reformulated within the mimetic approach. In the presence of a mimetic field, the model contains non-trivial vacuum solutions which could be free of spacetime singularity because of the Born-Infeld nature of the theory. We study a realistic primordial vacuum universe and prove the existence of regular solutions, such as primordial inflationary solutions of de Sitter type or bouncing solutions. Besides, the linear instabilities present in the EiBI model are found to be avoidable for some interesting bouncing solutions in which the physical metric as well as the auxiliary metric are regular at the background level.

  12. Primordial cosmology in mimetic Born-Infeld gravity

    DOE PAGES

    Bouhmadi-Lopez, Mariam; Chen, Che-Yu; Chen, Pisin

    2017-11-29

    Here, the Eddington-inspired-Born-Infeld (EiBI) model is reformulated within the mimetic approach. In the presence of a mimetic field, the model contains non-trivial vacuum solutions which could be free of spacetime singularity because of the Born-Infeld nature of the theory. We study a realistic primordial vacuum universe and prove the existence of regular solutions, such as primordial inflationary solutions of de Sitter type or bouncing solutions. Besides, the linear instabilities present in the EiBI model are found to be avoidable for some interesting bouncing solutions in which the physical metric as well as the auxiliary metric are regular at the background level.

  13. An overview of unconstrained free boundary problems

    PubMed Central

    Figalli, Alessio; Shahgholian, Henrik

    2015-01-01

    In this paper, we present a survey concerning unconstrained free boundary problems of the type [formula omitted in this record], where B1 is the unit ball, Ω is an unknown open set, F1 and F2 are elliptic operators (admitting regular solutions), and the underlying function space is to be specified in each case. Our main objective is to discuss a unifying approach to the optimal regularity of solutions to the above matching problems, and to list several open problems in this direction. PMID:26261367

  14. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
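
    The paper's harmonic penalty itself is not reproduced here, but the Lq family it approximates is easy to tabulate: for q between 1/2 and 1, the penalty |w|^q sits between the sparser, nonconvex L1/2 penalty and the convex lasso penalty.

    ```python
    # Rough illustration of the Lq penalty family that harmonic regularization
    # approximates: |w|^q for q between 1/2 (sparser, nonconvex) and 1 (lasso).
    # The paper's actual harmonic penalty formula is not reproduced here.
    import numpy as np

    w = np.linspace(-2, 2, 9)
    for q in (0.5, 0.75, 1.0):
        print(f"q={q}: ", np.round(np.abs(w) ** q, 3))
    # For |w| < 1 the q < 1 penalties exceed |w| itself, pushing small
    # coefficients harder toward exact zero than the lasso does.
    ```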

  15. Advanced Imaging Methods for Long-Baseline Optical Interferometry

    NASA Astrophysics Data System (ADS)

    Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.

    2008-11-01

    We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired from radio-astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors the object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.

  16. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389

  17. Feasibility of inverse problem solution for determination of city emission function from night sky radiance measurements

    NASA Astrophysics Data System (ADS)

    Petržala, Jaromír

    2018-07-01

    The knowledge of the emission function of a city is crucial for simulation of sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving the given inverse problem; in particular, we test the fitness of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model for calculation of the sky spectral radiance in the form of a functional of the emission spectral radiance. All of the mentioned approaches were then examined in numerical experiments with synthetic data generated for a fictitious city and perturbed by random errors. The results demonstrate that the second-order Tikhonov regularization method, together with choosing the regularization parameter by the L-curve maximum-curvature criterion, provides solutions which are in good agreement with the assumed model emission functions.
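
    A compact sketch of the winning combination reported above, second-order Tikhonov regularization with the L-curve maximum-curvature criterion, on a toy smoothing problem; the curvature is estimated with simple finite differences rather than a production corner-finding routine.

    ```python
    # Sketch of second-order Tikhonov regularization with an L-curve parameter
    # choice; toy problem and a finite-difference curvature estimate only.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 80
    A = np.tril(np.ones((n, n))) / n                      # smoothing (ill-posed) toy operator
    x_true = np.exp(-0.5 * ((np.arange(n) - 40) / 8.0) ** 2)
    b = A @ x_true + 1e-3 * rng.standard_normal(n)

    L = np.diff(np.eye(n), n=2, axis=0)                   # second-order difference operator

    lams = np.logspace(-8, 0, 40)
    rho, eta = [], []                                     # log residual / log seminorm for the L-curve
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(L @ x)))
    rho, eta = np.array(rho), np.array(eta)

    # Curvature of the log-log L-curve via finite differences in log(lam);
    # the corner is the point of maximum absolute curvature.
    t = np.log(lams)
    dr, de = np.gradient(rho, t), np.gradient(eta, t)
    d2r, d2e = np.gradient(dr, t), np.gradient(de, t)
    kappa = (dr * d2e - de * d2r) / (dr**2 + de**2) ** 1.5

    best = lams[np.argmax(np.abs(kappa))]
    print("L-curve corner at lambda =", best)
    ```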

  18. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bildhauer, Michael, E-mail: bibi@math.uni-sb.de; Fuchs, Martin, E-mail: fuchs@math.uni-sb.de

    2012-12-15

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.

  19. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of such a solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  20. Simultaneous reconstruction of faults and slip fields

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.

  1. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy

    2017-03-09

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).
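
    The D/L < 0.2 rule of thumb quoted above is easy to apply: for deep water, linear wave theory gives the wavelength L = gT^2/(2*pi) from the wave period T. The monopile diameter below is a made-up example value.

    ```python
    # Quick check of the D/L < 0.2 rule of thumb for when strip theory
    # (Morison) tracks potential flow; uses deep-water linear dispersion
    # L = g * T**2 / (2*pi). Monopile diameter is a hypothetical example.
    import math

    g = 9.81
    diameter = 6.0                        # m, hypothetical monopile

    for period in (4.0, 6.0, 8.0, 12.0):  # wave periods in seconds
        wavelength = g * period**2 / (2.0 * math.pi)
        ratio = diameter / wavelength
        verdict = "ST expected to track PF" if ratio < 0.2 else "diffraction regime: use PF"
        print(f"T={period:5.1f} s  L={wavelength:7.1f} m  D/L={ratio:.3f}  -> {verdict}")
    ```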

  2. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy

    2016-07-01

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).

  3. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

    This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  4. High-resolution CSR GRACE RL05 mascons

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2016-10-01

    The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.

  5. Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.

    PubMed

    Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S

    2015-07-27

    In this paper we propose a fast but accurate algorithm for numerical modeling of light fields in a turbid medium slab. Numerical solution of the radiative transfer equation (RTE) requires its discretization, based on eliminating the anisotropic part of the solution and replacing the scattering integral with a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part ensures fast convergence of the algorithm in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. The significant increase in solution accuracy obtained with synthetic iterations allows the two-stream approximation to be applied for determining the regular part. This approach permits the proposed method to be generalized to an arbitrary 3D geometry of the medium.

  6. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.

  7. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of such a solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  8. A regularization of the Burgers equation using a filtered convective velocity

    NASA Astrophysics Data System (ADS)

    Norgard, Greg; Mohseni, Kamran

    2008-08-01

    This paper examines the properties of a regularization of the Burgers equation in one and multiple dimensions using a filtered convective velocity, which we have dubbed the convectively filtered Burgers (CFB) equation. A physical motivation behind the filtering technique is presented. An existence and uniqueness theorem for multiple dimensions and a general class of filters is proven. Multiple invariants of motion are found for the CFB equation which are shown to be shared with the viscous and inviscid Burgers equations. Traveling wave solutions are found for a general class of filters and are shown to converge to weak solutions of the inviscid Burgers equation with the correct wave speed. Numerical simulations are conducted in 1D and 2D cases where the shock behavior, shock thickness and kinetic energy decay are examined. Energy spectra are also examined and are shown to be related to the smoothness of the solutions. This approach is presented with the hope of being extended to shock regularization of the compressible Euler equations.
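
    The filtering idea can be sketched in one dimension: advect u with a Helmholtz-filtered velocity ubar, where ubar_hat = u_hat / (1 + alpha^2 k^2) in Fourier space. The discretization below (first-order upwind, forward Euler) is a toy scheme for illustration, not the paper's numerical method.

    ```python
    # Minimal 1-D sketch of the convectively filtered Burgers (CFB) idea:
    # u_t + ubar * u_x = 0 with ubar a Helmholtz-filtered velocity,
    # ubar_hat = u_hat / (1 + alpha^2 k^2). Toy discretization only.
    import numpy as np

    n, alpha = 256, 0.05
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
    u = np.sin(x) + 0.5                       # steepens into a shock under pure Burgers

    def helmholtz_filter(u):
        return np.real(np.fft.ifft(np.fft.fft(u) / (1.0 + alpha**2 * k**2)))

    t, t_end = 0.0, 1.5
    while t < t_end:
        ubar = helmholtz_filter(u)
        dt = 0.4 * dx / np.max(np.abs(ubar))  # CFL-limited step
        # upwind derivative chosen by the sign of the filtered velocity
        dudx = np.where(ubar > 0.0,
                        (u - np.roll(u, 1)) / dx,
                        (np.roll(u, -1) - u) / dx)
        u = u - dt * ubar * dudx
        t += dt

    print("max |u| =", np.abs(u).max(), " (front smoothed over a width ~alpha)")
    ```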

  9. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.

  10. A hybrid inventory management system responding to regular demand and surge demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammad S. Roni; Mingzhou Jin; Sandra D. Eksioglu

    2014-06-01

    This paper proposes a hybrid policy for a stochastic inventory system facing regular demand and surge demand. The combination of two different demand patterns can be observed in many areas, such as healthcare inventory and humanitarian supply chain management. The surge demand has a lower arrival rate but higher demand volume per arrival. The solution approach proposed in this paper incorporates the level crossing method and mixed integer programming technique to optimize the hybrid inventory policy with both regular orders and emergency orders. The level crossing method is applied to obtain the equilibrium distributions of inventory levels under a given policy. The model is further transformed into a mixed integer program to identify an optimal hybrid policy. A sensitivity analysis is conducted to investigate the impact of parameters on the optimal inventory policy and minimum cost. Numerical results clearly show the benefit of using the proposed hybrid inventory model. The model and solution approach could help healthcare providers or humanitarian logistics providers in managing their emergency supplies in responding to surge demands.

  11. An approach for the regularization of a power flow solution around the maximum loading point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kataoka, Y.

    1992-08-01

    In the conventional power flow solution, the boundary conditions are directly specified by the active power and reactive power at each node, so that the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method for achieving wide-range regularity by making some modifications to the conventional power flow solution method, thereby eliminating the singular point or shifting it to the region with voltage lower than that of the maximum loading point. Continuous computation of V-P curves, including the maximum loading point, is thereby realized. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.

  12. The Quality Control Circle: Is It for Education?

    ERIC Educational Resources Information Center

    Land, Arthur J.

    From its start in Japan after World War II, the Quality Control Circle (Q.C.) approach to management and organizational operation evolved into what it is today: people doing similar work meeting regularly to identify, objectively analyze, and develop solutions to problems. The Q.C. approach meets Maslow's theory of motivation by inviting…

  13. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could for example be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
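
    Step 2 of the loop described above, expressing each patch as a sparse combination of learned dictionary atoms, can be sketched with standard dictionary-learning tools; the blocky toy slowness image and all parameter values below are placeholders, and the travel-time inversion itself (steps 1 and 3) is not included.

    ```python
    # Sketch of the local-patch step: learn a dictionary from patches of a toy
    # slowness image and re-express each patch as a sparse combination of atoms.
    import numpy as np
    from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(5)
    img = np.kron(rng.uniform(2.0, 4.0, (8, 8)), np.ones((8, 8)))  # blocky toy slowness map, 64x64
    img += 0.05 * rng.standard_normal(img.shape)                   # noise

    patch_size = (8, 8)
    patches = extract_patches_2d(img, patch_size)
    X = patches.reshape(len(patches), -1)
    mean = X.mean(axis=1, keepdims=True)
    X = X - mean                                                   # learn on zero-mean patches

    dico = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp",
                                       transform_n_nonzero_coefs=3, random_state=0)
    code = dico.fit(X).transform(X)                                # sparse codes, <= 3 atoms per patch
    X_hat = code @ dico.components_ + mean

    recon = reconstruct_from_patches_2d(X_hat.reshape(patches.shape), img.shape)
    print("patch-regularized image RMS change:", np.sqrt(np.mean((recon - img) ** 2)))
    ```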

  14. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

    Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of Rosenau's regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity..., and at the same time, it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean-free-path epsilon approaches 0, and the convergence rate is estimated.

  15. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
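
    For context, the baseline that CV-SES improves on can be written down directly: a brute-force grid over the two parameters (here C and a class-weight ratio), with the CV error evaluated at each grid point. A finite grid can miss the global minimum that exact solution and error surfaces locate; the dataset and grid below are arbitrary.

    ```python
    # Baseline illustration of the two-parameter CV error surface for a
    # cost-sensitive SVM, computed by brute-force grid search (the approach
    # the paper improves on with exact solution/error surfaces).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, weights=[0.85, 0.15], random_state=0)

    C_grid = np.logspace(-2, 2, 9)        # overall regularization strength
    ratio_grid = np.logspace(-1, 1, 9)    # relative weight of the minority class
    cv_err = np.empty((len(C_grid), len(ratio_grid)))

    for i, c in enumerate(C_grid):
        for j, r in enumerate(ratio_grid):
            clf = SVC(kernel="linear", C=c, class_weight={0: 1.0, 1: r})
            cv_err[i, j] = 1.0 - cross_val_score(clf, X, y, cv=5).mean()

    i, j = np.unravel_index(np.argmin(cv_err), cv_err.shape)
    print(f"grid minimum CV error {cv_err[i, j]:.3f} at C={C_grid[i]:.3g}, ratio={ratio_grid[j]:.3g}")
    ```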

  16. Efficient Solution of Three-Dimensional Problems of Acoustic and Electromagnetic Scattering by Open Surfaces

    NASA Technical Reports Server (NTRS)

    Turc, Catalin; Anand, Akash; Bruno, Oscar; Chaubell, Julian

    2011-01-01

    We present a computational methodology (a novel Nystrom approach based on use of a non-overlapping patch technique and Chebyshev discretizations) for efficient solution of problems of acoustic and electromagnetic scattering by open surfaces. Our integral equation formulations (1) incorporate, as ansatz, the singular nature of open-surface integral-equation solutions, and (2) for the Electric Field Integral Equation (EFIE), use analytical regularizers that effectively reduce the number of iterations required by iterative linear-algebra solutions based on Krylov-subspace solvers.

  17. The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chiun-Chang, E-mail: chlee@mail.nhcue.edu.tw

    2014-05-15

    The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we obtain the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. Then the classical solutions follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that global non-electroneutrality implies pointwise non-electroneutrality.

  18. Nonpolynomial Lagrangian approach to regular black holes

    NASA Astrophysics Data System (ADS)

    Colléaux, Aimeric; Chinaglia, Stefano; Zerbini, Sergio

    We present a review on Lagrangian models admitting spherically symmetric regular black holes (RBHs), and cosmological bounce solutions. Nonlinear electrodynamics, nonpolynomial gravity, and fluid approaches are explained in detail. They consist, respectively, in a gauge invariant generalization of the Maxwell Lagrangian, in modifications of the Einstein-Hilbert action via nonpolynomial curvature invariants, and finally in the reconstruction of density profiles able to cure the central singularity of black holes. The nonpolynomial gravity curvature invariants have the special property of being second-order and polynomial in the metric field in spherically symmetric spacetimes. Along the way, other models and results are discussed, and some general properties that RBHs should satisfy are mentioned. A covariant Sakharov criterion for the absence of singularities in dynamical spherically symmetric spacetimes is also proposed and checked for some examples of such regular metric fields.

  19. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
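
    The failure mode can be seen directly in the standard gradient-projection matrix P = I - J^T (J J^T)^{-1} J, which is undefined when the constraint gradients (rows of J) become linearly dependent. The sketch below contrasts it with a pseudoinverse-based stand-in that remains defined; this only illustrates the singularity, and is not the projection matrix proposed in the paper.

    ```python
    # Illustration of the singularity in the standard gradient-projection
    # matrix P = I - J^T (J J^T)^{-1} J when constraint gradients become
    # linearly dependent, and a pseudoinverse-based stand-in that survives.
    import numpy as np

    def projection_inv(J):
        return np.eye(J.shape[1]) - J.T @ np.linalg.inv(J @ J.T) @ J

    def projection_pinv(J):
        return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

    # Regular case: two independent constraint gradients in R^3.
    J_ok = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
    # Degenerate case: the second gradient is a multiple of the first.
    J_bad = np.array([[1.0, 0.0, 0.0],
                      [2.0, 0.0, 0.0]])

    print("regular case, both constructions agree:",
          np.allclose(projection_inv(J_ok), projection_pinv(J_ok)))
    try:
        projection_inv(J_bad)
    except np.linalg.LinAlgError as e:
        print("standard projection fails:", e)
    P = projection_pinv(J_bad)
    print("pseudoinverse projection still annihilates J:", np.allclose(J_bad @ P, 0.0))
    ```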

  20. Analysis of self-similar solutions of multidimensional conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyfitz, Barbara Lee

    2014-02-15

    This project focused on analysis of multidimensional conservation laws, specifically on extensions to the study of self-similar solutions, a project initiated by the PI. In addition, progress was made on an approach to studying conservation laws of very low regularity; in this research, the context was a novel problem in chromatography. Two graduate students in mathematics were supported during the grant period, and have almost completed their thesis research.

  1. Simple, explicitly time-dependent, and regular solutions of the linearized vacuum Einstein equations in Bondi-Sachs coordinates

    NASA Astrophysics Data System (ADS)

    Mädler, Thomas

    2013-05-01

    Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.

  2. Mascons, GRACE, and Time-variable Gravity

    NASA Technical Reports Server (NTRS)

    Lemoine, F.; Lutchke, S.; Rowlands, D.; Klosko, S.; Chinn, D.; Boy, J. P.

    2006-01-01

    The GRACE mission has been in orbit for three years and now regularly produces snapshots of the Earth's gravity field on a monthly basis. The convenient standard approach has been to perform global solutions in spherical harmonics. Alternative local representations of mass variations using mascons show great promise and offer advantages in terms of computational efficiency, minimization of problems due to aliasing, and increased temporal resolution. In this paper, we discuss the results of processing the GRACE KBRR data from March 2003 through August 2005 to produce solutions for GRACE mass variations over mid-latitude and equatorial regions, such as South America, India and the United States, and over the polar regions (Antarctica and Greenland), with a focus on the methodology. We describe in particular mascon solutions developed on regular 4 degree x 4 degree grids, and those tailored specifically to drainage basins over these regions.

  3. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

    The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.

  4. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity based approaches.
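
    The unstabilized core that SOMP builds on, plain orthogonal matching pursuit, is short enough to write out: greedily pick the atom most correlated with the residual, then re-fit all active coefficients by least squares, stopping at the chosen sparsity level. The stabilization layer that distinguishes SOMP is not reproduced here.

    ```python
    # Core orthogonal-matching-pursuit loop (greedy atom selection plus least
    # squares on the active set); SOMP's stabilization layer is omitted.
    import numpy as np

    def omp(A, b, sparsity):
        """Greedy sparse solve of A x ~= b with at most `sparsity` nonzeros."""
        residual, support = b.copy(), []
        x = np.zeros(A.shape[1])
        for _ in range(sparsity):
            # pick the column most correlated with the current residual
            idx = int(np.argmax(np.abs(A.T @ residual)))
            if idx not in support:
                support.append(idx)
            # re-fit all active coefficients by least squares
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            x[:] = 0.0
            x[support] = coef
            residual = b - A @ x
        return x

    rng = np.random.default_rng(6)
    n, m, k = 128, 48, 4
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true + 1e-4 * rng.standard_normal(m)

    x_hat = omp(A, b, sparsity=k)
    print("support recovered:",
          set(np.flatnonzero(np.abs(x_hat) > 1e-6)) == set(np.flatnonzero(x_true)))
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```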

  5. Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Budzien, S. A.; Hei, M. A.

    2017-03-01

    We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new regularization approach based on a partial differential equation.
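
    The Richardson-Lucy ISRA mentioned above has a particularly compact multiplicative update, x <- x * (A^T(b / Ax)) / (A^T 1), which preserves nonnegativity from a positive starting guess. A minimal sketch on a toy nonnegative system, without the regularization discussed in the paper:

    ```python
    # Minimal Richardson-Lucy iteration for a nonnegative linear inverse problem:
    # x_{k+1} = x_k * (A^T (b / (A x_k))) / (A^T 1); toy matrices, no regularization.
    import numpy as np

    rng = np.random.default_rng(7)
    m, n = 120, 60
    A = rng.uniform(0.0, 1.0, (m, n))   # nonnegative toy "observation geometry"
    x_true = rng.uniform(0.0, 2.0, n)
    b = A @ x_true                      # noiseless synthetic data

    x = np.ones(n)                      # positive start; iterates stay nonnegative
    norm = A.T @ np.ones(m)
    for _ in range(500):
        x *= (A.T @ (b / (A @ x))) / norm

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```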

  6. Markov chain Monte Carlo techniques and spatial-temporal modelling for medical EIT.

    PubMed

    West, Robert M; Aykroyd, Robert G; Meng, Sha; Williams, Richard A

    2004-02-01

    Many imaging problems, such as imaging with electrical impedance tomography (EIT), can be shown to be inverse problems: that is, either there is no unique solution or the solution does not depend continuously on the data. As a consequence, solution of inverse problems based on measured data alone is unstable, particularly if the mapping between the solution distribution and the measurements is also nonlinear, as in EIT. To deliver a practical stable solution, it is necessary to make considerable use of prior information or regularization techniques. The role of a Bayesian approach is therefore of fundamental importance, especially when coupled with Markov chain Monte Carlo (MCMC) sampling to provide information about solution behaviour. Spatial smoothing is a commonly used approach to regularization. In the human thorax EIT example considered here, nonlinearity increases the difficulty of imaging using only boundary data, leading to reconstructions that are often rather too smooth. In particular, in medical imaging the resistivity distribution usually contains substantial jumps at the boundaries of different anatomical regions. With spatial smoothing these boundaries can be masked by blurring. This paper focuses on the medical application of EIT to monitor lung and cardiac function and uses explicit geometric information regarding anatomical structure and incorporates temporal correlation. Some simple properties are assumed known, or at least reliably estimated from separate studies, whereas others are estimated from the voltage measurements. This structural formulation will also allow direct estimation of clinically important quantities, such as ejection fraction and residual capacity, along with assessment of precision.

  7. Development of daily "swath" mascon solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas

    2016-04-01

    The Gravity Recovery and Climate Experiment (GRACE) mission has provided invaluable data, the only data of their kind over the past 14 years, measuring the total water column in the Earth system. The GRACE project provides monthly average solutions, and experimental quick-look solutions and regularized sliding-window solutions with variable daily weights are available from the Center for Space Research (CSR). The need for special handling of these solutions in data assimilation, and the possibility of capturing the total water storage (TWS) signal at sub-monthly time scales, motivated this study. This study discusses the progress of the development of true daily high-resolution "swath" mascon total water storage estimates from GRACE using Tikhonov regularization. These solutions include the estimates of daily total water storage (TWS) for the mascon elements that were "observed" by the GRACE satellites on a given day. This paper discusses the computation techniques and the signal, error, and uncertainty characterization of these daily solutions. We discuss comparisons with the official GRACE RL05 solutions and with the CSR mascon solution to characterize the impact on science results, especially at sub-monthly time scales. The evaluation is done with emphasis on the temporal signal characteristics and validated against in-situ data sets and multiple models.
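
    For orientation, Tikhonov regularization of a linear estimation problem, the technique named above, amounts to damped least squares. A minimal dense-matrix sketch follows (hypothetical names; the operational GRACE mascon processing chain is far more elaborate):

      import numpy as np

      def tikhonov_solve(A, y, lam):
          # Solve argmin ||A x - y||^2 + lam**2 * ||x||^2 via the normal equations.
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ y)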

  8. GRACE time-variable gravity field recovery using an improved energy balance approach

    NASA Astrophysics Data System (ADS)

    Shang, Kun; Guo, Junyi; Shum, C. K.; Dai, Chunli; Luo, Jia

    2015-12-01

    A new approach based on the energy conservation principle for satellite gravimetry missions has been developed; it yields more accurate estimation of in situ geopotential difference observables using K-band ranging (KBR) measurements from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission. This new approach preserves more gravity information sensed by KBR range-rate measurements and reduces orbit error as compared to previous energy balance methods. Results from analysis of 11 yr of GRACE data indicate that the resulting geopotential difference estimates agree well with predicted values from official Level 2 solutions, with a much higher correlation of 0.9, as compared to the 0.5-0.8 reported by previously published energy balance studies. We demonstrate that our approach produces a time-variable gravity solution comparable with the Level 2 solutions. The regional GRACE temporal gravity solutions over Greenland reveal that a substantially higher temporal resolution is achievable at 10-d sampling as compared to the official monthly solutions, without compromising spatial resolution and without the need for regularization or post-processing.

  9. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    NASA Astrophysics Data System (ADS)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

    In this paper we consider a parameter identification problem (PIP) for data oscillating in time, which can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple 'low' minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space.

  10. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
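
    A toy version of the second selection strategy can be sketched as follows: solve the l1 problem over a grid of regularization parameters (here with plain iterative soft-thresholding) and keep the parameter whose residual variance is closest to the assumed measurement-noise variance. Everything below is illustrative and hypothetical, not the authors' implementation.

      import numpy as np

      def ista(A, y, lam, n_iter=500):
          # Iterative soft-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1.
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = x - A.T @ (A @ x - y) / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          return x

      def pick_lambda(A, y, noise_var, lambdas):
          # Discrepancy principle: residual variance should match the noise variance.
          gaps = [abs((y - A @ ista(A, y, lam)).var() - noise_var) for lam in lambdas]
          return lambdas[int(np.argmin(gaps))]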

  11. LP-stability for the strong solutions of the Navier-Stokes equations in the whole space

    NASA Astrophysics Data System (ADS)

    Beirao da Veiga, H.; Secchi, P.

    1985-10-01

    We consider the motion of a viscous fluid filling the whole space R3, governed by the classical Navier-Stokes equations (1). Existence of global (in time) regular solutions for this system of non-linear partial differential equations is still an open problem. From both the mathematical and the physical points of view, an interesting property is the stability (or not) of the (eventual) global regular solutions. Here, we assume that v1(t,x) is a solution with initial data a1(x). For small perturbations of a1, we want the solution v1(t,x) to be only slightly perturbed, too. Due to viscosity, it is even expected that the perturbed solution v2(t,x) approaches the unperturbed one as time goes to +infinity. This is precisely the result proved in this paper. To measure the distance between v1(t,x) and v2(t,x) at each time t, suitable norms are introduced (Lp-norms). For fluids filling a bounded vessel, exponential decay of the above distance is expected; such a strong result is not reasonable for fluids filling the entire space.

  12. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
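
    The bias-variance bookkeeping described above is easy to reproduce in outline. The sketch below assumes a caller-supplied reconstruct() function and known ground truth, both hypothetical:

      import numpy as np

      def image_error_stats(reconstruct, x_true, y_clean, noise_sigma,
                            n_trials=100, seed=0):
          # Repeat the reconstruction under fresh noise realizations and split
          # the mean-squared error into squared bias plus variance.
          rng = np.random.default_rng(seed)
          recs = np.array([
              reconstruct(y_clean + noise_sigma * rng.standard_normal(y_clean.shape))
              for _ in range(n_trials)
          ])
          bias2 = np.mean((recs.mean(axis=0) - x_true) ** 2)
          var = np.mean(recs.var(axis=0))
          return bias2, var, bias2 + var      # MSE = bias^2 + variance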

  13. An Onsager Singularity Theorem for Turbulent Solutions of Compressible Euler Equations

    NASA Astrophysics Data System (ADS)

    Drivas, Theodore D.; Eyink, Gregory L.

    2017-12-01

    We prove that bounded weak solutions of the compressible Euler equations will conserve thermodynamic entropy unless the solution fields have sufficiently low space-time Besov regularity. A quantity measuring kinetic energy cascade will also vanish for such Euler solutions, unless the same singularity conditions are satisfied. It is shown furthermore that strong limits of solutions of compressible Navier-Stokes equations that are bounded and exhibit anomalous dissipation are weak Euler solutions. These inviscid limit solutions have non-negative anomalous entropy production and kinetic energy dissipation, with both vanishing when solutions are above the critical degree of Besov regularity. Stationary, planar shocks in Euclidean space with an ideal-gas equation of state provide simple examples that satisfy the conditions of our theorems and which demonstrate sharpness of our L3-based conditions. These conditions involve space-time Besov regularity, but we show that they are satisfied by Euler solutions that possess similar space regularity uniformly in time.

  14. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion - the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation - is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion remains imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite - a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.

  15. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainty can be crucial for meaningful estimation, it is often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude, and has a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour after the origin time.
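
    The alternation between inversion and data-covariance re-estimation sketched in the abstract can be illustrated in a few lines. This is a schematic toy (diagonal covariance, fixed Tikhonov parameter, hypothetical names), not the W-phase production code:

      import numpy as np

      def invert_with_covariance(A, y, lam, n_outer=5):
          # Alternate a weighted Tikhonov inversion with a refresh of the
          # (diagonal) data covariance estimated from the current residuals.
          m, n = A.shape
          C = np.eye(m)                       # a priori data covariance
          for _ in range(n_outer):
              W = np.linalg.inv(C)            # data weights
              x = np.linalg.solve(A.T @ W @ A + lam * np.eye(n), A.T @ W @ y)
              r = y - A @ x
              C = np.diag(np.maximum(r ** 2, 1e-8))
          return x, C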

  16. Estimation of Faults in DC Electrical Power System

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using l1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
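
    The core idea, l1 regularization producing a sparse fault vector from a linear circuit model, can be mimicked with off-the-shelf tools. A hypothetical toy example using scikit-learn's Lasso follows (the paper's real-time convex solver and circuit model are not reproduced):

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      H = rng.standard_normal((100, 40))      # linearized circuit model (hypothetical)
      x_true = np.zeros(40)
      x_true[[3, 17]] = [1.5, -2.0]           # two simultaneous faults
      y = H @ x_true + 0.01 * rng.standard_normal(100)

      # The l1 penalty drives most fault states to exactly zero.
      est = Lasso(alpha=0.05).fit(H, y)
      print("detected faults:", np.nonzero(np.abs(est.coef_) > 0.1)[0])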

  17. Construction of normal-regular decisions of Bessel typed special system

    NASA Astrophysics Data System (ADS)

    Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.

    2017-09-01

    A special system of second-order partial differential equations, solved via the degenerate hypergeometric function and reducing to the Bessel functions of two variables, is studied. To construct a solution of this system near its regular and irregular singularities, we use the method of Frobenius-Latysheva, applying the concepts of rank and antirank. We prove the basic theorem establishing the existence of four linearly independent solutions of the Bessel-type system under study. To prove the existence of normal-regular solutions, we establish necessary conditions for the existence of such solutions. The existence and convergence of a normal-regular solution are shown using the notions of rank and antirank.

  18. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  19. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as a strictly layered structure, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters, and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The automated regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  20. Lattice Boltzmann approach for complex nonequilibrium flows.

    PubMed

    Montessori, A; Prestininzi, P; La Rocca, M; Succi, S

    2015-10-01

    We present a lattice Boltzmann realization of Grad's extended hydrodynamic approach to nonequilibrium flows. This is achieved by using higher-order isotropic lattices coupled with a higher-order regularization procedure. The method is assessed for flow across parallel plates and three-dimensional flows in porous media, showing excellent agreement of the mass flow with analytical and numerical solutions of the Boltzmann equation across the full range of Knudsen numbers, from the hydrodynamic regime to ballistic motion.

  1. Assessment of ALEGRA Computation for Magnetostatic Configurations

    DOE PAGES

    Grinfeld, Michael; Niederhaus, John Henry; Porwitzky, Andrew

    2016-03-01

    A closed-form solution is described here for the equilibrium configurations of the magnetic field in a simple heterogeneous domain. This problem and its solution are used for rigorous assessment of the accuracy of the ALEGRA code in the quasistatic limit. By the equilibrium configuration we understand the static condition, or the stationary states without macroscopic current. The analysis includes quite a general class of 2D solutions for which a linear isotropic metallic matrix is placed inside a stationary magnetic field approaching a constant value H i° at infinity. The process of evolution of the magnetic fields inside and outside the inclusion, and the parameters for which the quasi-static approach provides self-consistent results, are also explored. Lastly, it is demonstrated that under spatial mesh refinement, ALEGRA converges to the analytic solution for the interior of the inclusion at the expected rate, for both body-fitted and regular rectangular meshes.

  2. The elastic ratio: introducing curvature into ratio-based image segmentation.

    PubMed

    Schoenemann, Thomas; Masnou, Simon; Cremers, Daniel

    2011-09-01

    We present the first ratio-based image segmentation method that allows imposing curvature regularity of the region boundary. Our approach is a generalization of the ratio framework pioneered by Jermyn and Ishikawa so as to allow penalty functions that take into account the local curvature of the curve. The key idea is to cast the segmentation problem as one of finding cyclic paths of minimal ratio in a graph where each graph node represents a line segment. Among ratios whose discrete counterparts can be globally minimized with our approach, we focus in particular on the elastic ratio [Formula: see text] that depends, given an image I, on the oriented boundary C of the segmented region candidate. Minimizing this ratio amounts to finding a curve, neither small nor too curvy, through which the brightness flux is maximal. We prove the existence of minimizers for this criterion among continuous curves with mild regularity assumptions. We also prove that the discrete minimizers provided by our graph-based algorithm converge, as the resolution increases, to continuous minimizers. In contrast to most existing segmentation methods with computable and meaningful, i.e., nondegenerate, global optima, the proposed approach is fully unsupervised in the sense that it does not require any kind of user input such as seed nodes. Numerical experiments demonstrate that curvature regularity allows substantial improvement of the quality of segmentations. Furthermore, our results allow drawing conclusions about global optima of a parameterization-independent version of the snakes functional: the proposed algorithm allows determining parameter values where the functional has a meaningful solution and simultaneously provides the corresponding global solution.

  3. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  4. A novel scatter-matrix eigenvalues-based total variation (SMETV) regularization for medical image restoration

    NASA Astrophysics Data System (ADS)

    Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian

    2015-12-01

    Total variation (TV) based regularization has proven a popular and effective model for image restoration because of its edge-preserving ability. However, since TV favors a piecewise-constant solution, processing results in the flat regions of the image easily exhibit "staircase effects", and the amplitude of the edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot adapt to the spatially local information of the image. In this paper, we propose a novel scatter-matrix eigenvalues-based TV (SMETV) regularization with an image blind restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into regularization by using the edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve the edge and detailed information. Moreover, it becomes more robust to changes of the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.
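
    For reference, classical TV regularization of a denoising problem can be minimized by gradient descent on a smoothed functional; the piecewise-constant bias discussed above comes from the TV term itself. A generic sketch with periodic boundaries follows (not the SMETV method, whose spatially adaptive parameter is the paper's contribution):

      import numpy as np

      def tv_denoise(img, lam=0.1, step=0.05, n_iter=200, eps=1e-6):
          # Gradient descent on 0.5*||u - img||^2 + lam * sum sqrt(|grad u|^2 + eps).
          u = img.astype(float).copy()
          for _ in range(n_iter):
              ux = np.roll(u, -1, axis=1) - u           # forward differences
              uy = np.roll(u, -1, axis=0) - u
              mag = np.sqrt(ux**2 + uy**2 + eps)        # smoothed gradient magnitude
              px, py = ux / mag, uy / mag
              div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
              u -= step * ((u - img) - lam * div)       # data term minus TV divergence
          return u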

  5. Regular and singular pulse and front solutions and possible isochronous behavior in the short-pulse equation: Phase-plane, multi-infinite series and variational approaches

    NASA Astrophysics Data System (ADS)

    Gambino, G.; Tanriver, U.; Guha, P.; Choudhury, A. Ghose; Choudhury, S. Roy

    2015-02-01

    In this paper we employ three recent analytical approaches to investigate the possible classes of traveling wave solutions of some members of a family of so-called short-pulse equations (SPE). A recent, novel application of phase-plane analysis is first employed to show the existence of breaking kink wave solutions in certain parameter regimes. Secondly, smooth traveling waves are derived using a recent technique to obtain convergent multi-infinite series solutions for the homoclinic (heteroclinic) orbits of the traveling-wave equations for the SPE equation, as well as for its generalized version with arbitrary coefficients. These correspond to pulse (kink or shock) solutions respectively of the original PDEs. We perform many numerical tests in different parameter regimes to pinpoint real saddle equilibrium points of the corresponding traveling-wave equations, as well as to ensure simultaneous convergence and continuity of the multi-infinite series solutions for the homoclinic/heteroclinic orbits anchored by these saddle points. Unlike the majority of unaccelerated convergent series, high accuracy is attained with relatively few terms. Finally, variational methods are employed to generate families of both regular and embedded solitary wave solutions for the SPE PDE. The technique for obtaining the embedded solitons incorporates several recent generalizations of the usual variational technique and is thus topical in itself. One unusual feature of the solitary waves derived here is that we are able to obtain them in analytical form (within the assumed ansatz for the trial functions). Thus, a direct error analysis is performed, showing the accuracy of the resulting solitary waves. Given the importance of solitary wave solutions in wave dynamics and information propagation in nonlinear PDEs, as well as the fact that not much is known about solutions of the family of generalized SPE equations considered here, the results obtained are both new and timely.

  6. Regular black holes: Electrically charged solutions, Reissner-Nordstroem outside a de Sitter core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemos, Jose P. S.; Zanchin, Vilson T.; Centro de Ciencias Naturais e Humanas, Universidade Federal do ABC, Rua Santa Adelia, 166, 09210-170, Santo Andre, Sao Paulo

    2011-06-15

    To have the correct picture of a black hole as a whole, it is of crucial importance to understand its interior. The singularities that lurk inside the horizon of the usual Kerr-Newman family of black hole solutions signal an endpoint to the physical laws and, as such, should be substituted in one way or another. A proposal that has been around for some time is to replace the singular region of the spacetime by a region containing some form of matter or false vacuum configuration that can also cohabit with the black hole interior. Black holes without singularities are called regular black holes. In the present work, regular black hole solutions are found within general relativity coupled to Maxwell's electromagnetism and charged matter. We show that there are objects which correspond to regular charged black holes, whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and the boundary between both regions is made of an electrically charged spherically symmetric coat. There are several types of solutions: regular nonextremal black holes with a null matter boundary, regular nonextremal black holes with a timelike matter boundary, regular extremal black holes with a timelike matter boundary, and regular overcharged stars with a timelike matter boundary. The main physical and geometrical properties of such charged regular solutions are analyzed.

  7. Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Oza, Nikunj C.

    2011-01-01

    In this paper we propose an innovative learning algorithm - a variation of the one-class nu Support Vector Machines (SVMs) learning algorithm - to produce sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
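
    For context, the benchmark the authors compare against is the standard one-class nu-SVM, available off the shelf; a small hypothetical usage example is shown below. The nu parameter upper-bounds the fraction of training outliers and lower-bounds the fraction of support vectors, which is exactly the sparsity the proposed method improves.

      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(1)
      X_train = rng.standard_normal((500, 2))            # nominal data only
      X_test = np.vstack([rng.standard_normal((50, 2)),  # nominal samples
                          rng.uniform(4, 6, size=(5, 2))])  # injected anomalies

      clf = OneClassSVM(nu=0.05, kernel="rbf", gamma=0.5).fit(X_train)
      pred = clf.predict(X_test)                         # +1 inlier, -1 outlier
      print("flagged anomalies:", int((pred == -1).sum()))
      print("support vectors kept:", clf.support_vectors_.shape[0])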

  8. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    PubMed

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the maximum-curvature point in the traction-versus-γ plot as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction quality compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblasts (MEFs) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22%, respectively, as compared to Reg-FTTC. Selection of an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
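
    A maximum-curvature rule like the γ* criterion described above can be sketched generically: sweep γ, record a scalar summary of the traction field, and pick the point of largest curvature on that curve. The names and the finite-difference curvature estimate below are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def max_curvature_gamma(gammas, tractions):
          # Curvature of the traction-versus-log10(gamma) curve by finite
          # differences; requires a reasonably dense, ordered sweep of gammas.
          g = np.log10(gammas)
          t = np.asarray(tractions, dtype=float)
          dt = np.gradient(t, g)
          d2t = np.gradient(dt, g)
          kappa = np.abs(d2t) / (1.0 + dt**2) ** 1.5
          return gammas[int(np.argmax(kappa))]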

  9. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.

  10. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.; Leung, Martin S.; Bless, Robert R.

    1991-01-01

    The proposed investigation of a Matched Asymptotic Expansion (MAE) method was carried out. It was concluded that the method of MAE is not applicable to launch vehicle ascent trajectory optimization due to the lack of a suitable stretched variable. More work was done on the earlier regular perturbation approach, using a piecewise analytic zeroth-order solution to generate a more accurate approximation. In the meantime, a singular perturbation approach using manifold theory is also under current investigation. Work on a general computational environment based on the use of MACSYMA and the weak Hamiltonian finite element method continued during this period. This methodology is capable of solving a large class of optimal control problems.

  11. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires computation of the residual and regularized solution norms. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibited computational efficiency over the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom, and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatic regularization parameters. The proposed algorithm exhibited superior performance over both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.

  12. Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach

    NASA Astrophysics Data System (ADS)

    Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan

    2017-05-01

    Dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type and governed by a set of differential-algebraic equations (DAEs) is studied. The classical approach of the multiple scales method (MSM) in time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e. with one- and two degrees-of-freedom. The approximate analytical solutions have been verified by numerical simulations.

  13. On the regularity criterion of weak solutions for the 3D MHD equations

    NASA Astrophysics Data System (ADS)

    Gala, Sadek; Ragusa, Maria Alessandra

    2017-12-01

    The paper deals with the 3D incompressible MHD equations and aims at improving a regularity criterion in terms of the horizontal gradient of velocity and magnetic field. It is proved that the weak solution (u, b) becomes regular provided that the horizontal gradients of the velocity and of the magnetic field satisfy a suitable integrability condition.

  14. 4D-tomographic reconstruction of water vapor using the hybrid regularization technique with application to the North West of Iran

    NASA Astrophysics Data System (ADS)

    Adavi, Zohre; Mashhadi-Hossainali, Masoud

    2015-04-01

    Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, which is due to atmospheric phenomena above the surface of the earth, depends on both space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. By analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals, inversion techniques are used for modeling the water vapor in this approach. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used for computing a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques. This method benefits from the advantages of both iterative and direct techniques. Moreover, it is independent of initial values. Based on this property, and using an appropriate resolution for the model, the number of model elements that are not constrained by GPS measurements is first minimized, and then water vapor density is estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
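
    A hybrid of an iterative solver with Tikhonov damping, in the spirit of the LSQR-plus-Tikhonov scheme described above, is directly available in SciPy: lsqr's damp argument adds the term damp^2 * ||x||^2 to the least-squares objective. A toy, hypothetical ray-geometry example:

      import numpy as np
      from scipy.sparse import random as sparse_random
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(2)
      A = sparse_random(800, 300, density=0.02, random_state=2)  # rays x voxels
      x_true = np.abs(rng.standard_normal(300))                  # water-vapor densities
      y = A @ x_true + 0.01 * rng.standard_normal(800)

      # Iterative damped least squares: min ||A x - y||^2 + damp^2 * ||x||^2.
      x_est = lsqr(A, y, damp=0.1)[0]
      print("relative error:",
            np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))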

  15. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

    Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to overcome this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.

  16. Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.

    PubMed

    Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D

    2017-11-01

    We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l 1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.

  17. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to effectively handle difficult challenges, such as occlusion or image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. Besides, we provide a closed-form solution for combining the L0 and L1 regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results both in accuracy and speed. PMID:27992474

  18. Accurate solution of the Poisson equation with discontinuities

    NASA Astrophysics Data System (ADS)

    Nave, Jean-Christophe; Marques, Alexandre; Rosales, Rodolfo

    2017-11-01

    Solving the Poisson equation in the presence of discontinuities is of great importance in many applications of science and engineering. In many cases, the discontinuities are caused by interfaces between different media, such as in multiphase flows. These interfaces are themselves solutions to differential equations, and can assume complex configurations. For this reason, it is convenient to embed the interface into a regular triangulation or Cartesian grid and solve the Poisson equation in this regular domain. We present an extension of the Correction Function Method (CFM), which was developed to solve the Poisson equation in the context of embedded interfaces. The distinctive feature of the CFM is that it uses partial differential equations to construct smooth extensions of the solution in the vicinity of interfaces. A consequence of this approach is that it can achieve high order of accuracy while maintaining compact discretizations. The extension we present removes the restrictions of the original CFM, and yields a method that can solve the Poisson equation when discontinuities are present in the solution, the coefficients of the equation (material properties), and the source term. We show results computed to fourth order of accuracy in two and three dimensions. This work was partially funded by DARPA, NSF, and NSERC.

  19. Application of the Group Foliation Method to the Complex Monge-Ampère Equation

    NASA Astrophysics Data System (ADS)

    Nutku, Y.; Sheftel, M. B.

    2001-04-01

    We apply the method of group foliation to the complex Monge-Ampère equation (CMA2) to establish a regular framework for finding its non-invariant solutions. We employ an infinite symmetry subgroup of CMA2 to produce a foliation of the solution space into orbits of solutions with respect to this group and a corresponding splitting of CMA2 into an automorphic system and a resolvent system. We propose a new approach to group foliation which is based on the commutator algebra of operators of invariant differentiation. This algebra, together with its Jacobi identities, provides the commutator representation of the resolvent system.

  20. Efficient Regular Perovskite Solar Cells Based on Pristine [70]Fullerene as Electron-Selective Contact.

    PubMed

    Collavini, Silvia; Kosta, Ivet; Völker, Sebastian F; Cabanero, German; Grande, Hans J; Tena-Zaera, Ramón; Delgado, Juan Luis

    2016-06-08

    [70]Fullerene is presented as an efficient alternative electron-selective contact (ESC) for regular-architecture perovskite solar cells (PSCs). A smart and simple, well-described solution processing protocol for the preparation of [70]- and [60]fullerene-based solar cells, namely the fullerene saturation approach (FSA), allowed us to obtain similar power conversion efficiencies for both fullerene materials (i.e., 10.4 and 11.4 % for [70]- and [60]fullerene-based devices, respectively). Importantly, despite the low electron mobility and significant visible-light absorption of [70]fullerene, the presented protocol allows the employment of [70]fullerene as an efficient ESC. The [70]fullerene film thickness and its solubility in the perovskite processing solutions are crucial parameters, which can be controlled by the use of this simple solution processing protocol. The damage to the [70]fullerene film through dissolution during the perovskite deposition is avoided through the saturation of the perovskite processing solution with [70]fullerene. Additionally, this fullerene-saturation strategy improves the performance of the perovskite film significantly and enhances the power conversion efficiency of solar cells based on different ESCs (i.e., [60]fullerene, [70]fullerene, and TiO2 ). Therefore, this universal solution processing protocol widens the opportunities for the further development of PSCs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. FAST TRACK COMMUNICATION: Regularized Kerr-Newman solution as a gravitating soliton

    NASA Astrophysics Data System (ADS)

    Burinskii, Alexander

    2010-10-01

    The charged, spinning and gravitating soliton is realized as a regular solution of the Kerr-Newman (KN) field coupled with a chiral Higgs model. A regular core of the solution is formed by a domain wall bubble interpolating between the external KN solution and a flat superconducting interior. An internal electromagnetic (em) field is expelled to the boundary of the bubble by the Higgs field. The solution reveals two new peculiarities: (i) the Higgs field is oscillating, similar to the known oscillon models; (ii) the em field forms on the edge of the bubble a Wilson loop, resulting in quantization of the total angular momentum.

  2. Generalised solutions for fully nonlinear PDE systems and existence-uniqueness theorems

    NASA Astrophysics Data System (ADS)

    Katzourakis, Nikos

    2017-07-01

    We introduce a new theory of generalised solutions which applies to fully nonlinear PDE systems of any order and allows for merely measurable maps as solutions. This approach bypasses the standard problems arising by the application of Distributions to PDEs and is not based on either integration by parts or on the maximum principle. Instead, our starting point builds on the probabilistic representation of derivatives via limits of difference quotients in the Young measures over a toric compactification of the space of jets. After developing some basic theory, as a first application we consider the Dirichlet problem and we prove existence-uniqueness-partial regularity of solutions to fully nonlinear degenerate elliptic 2nd order systems and also existence of solutions to the ∞-Laplace system of vectorial Calculus of Variations in L∞.

  3. Reconstruction of dynamic image series from undersampled MRI data using data-driven model consistency condition (MOCCO).

    PubMed

    Velikina, Julia V; Samsonov, Alexey A

    2015-11-01

    To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.

  4. RECONSTRUCTION OF DYNAMIC IMAGE SERIES FROM UNDERSAMPLED MRI DATA USING DATA-DRIVEN MODEL CONSISTENCY CONDITION (MOCCO)

    PubMed Central

    Velikina, Julia V.; Samsonov, Alexey A.

    2014-01-01

    Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724

  5. Partial regularity of weak solutions to a PDE system with cubic nonlinearity

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Xu, Xiangsheng

    2018-04-01

    In this paper we investigate regularity properties of weak solutions to a PDE system that arises in the study of biological transport networks. The system consists of a possibly singular elliptic equation for the scalar pressure of the underlying biological network coupled to a diffusion equation for the conductance vector of the network. There are several different types of nonlinearities in the system. Of particular mathematical interest is a term that is a polynomial function of solutions and their partial derivatives and this polynomial function has degree three. That is, the system contains a cubic nonlinearity. Only weak solutions to the system have been shown to exist. The regularity theory for the system remains fundamentally incomplete. In particular, it is not known whether or not weak solutions develop singularities. In this paper we obtain a partial regularity theorem, which gives an estimate for the parabolic Hausdorff dimension of the set of possible singular points.

  6. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, a distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results demonstrate the surprisingly good performance of these techniques. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
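
    A minimal sketch of the sparsity-plus-nonnegativity idea follows, assuming a linear source-receptor model b ≈ A x with x the vector of release amounts; the projected ISTA solver and all names are illustrative, not the paper's algorithms. With x ≥ 0 the ℓ1 norm reduces to sum(x), so one proximal step handles both the sparsity penalty and the nonnegativity constraint.

      import numpy as np

      def nonneg_sparse_source(a, b, lam=0.1, n_iter=500):
          # Projected ISTA for: min_x 0.5*||A x - b||^2 + lam*sum(x), x >= 0.
          step = 1.0 / np.linalg.norm(a, 2) ** 2        # 1/L with L = ||A||_2^2
          x = np.zeros(a.shape[1])
          for _ in range(n_iter):
              x = x - step * a.T @ (a @ x - b)          # gradient step on misfit
              x = np.maximum(x - step * lam, 0.0)       # joint l1 prox + projection
          return x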

  7. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which are otherwise a frequent consequence of signal suppression by regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the great Sumatra-Andaman earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and the Nile, are clearly evident, in contrast to the noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.

  8. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that exhibit very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes from the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  9. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations of regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile, some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. 
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type and other regularization methods in Banach spaces is the class of variational-inequality assumptions that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and on the nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou apply Banach space ideas to a concrete application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is an issue of high practical relevance. 
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rate results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  10. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through the imposition of total variation regularization, subsurface structures presenting sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency, an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy at reduced computational and memory demands compared to the use of classical approaches.
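
    The iteratively reweighted least-squares idea is easy to state in one dimension. The sketch below, with a dense first-difference matrix and a direct solve, is only illustrative: the paper's contribution is precisely to replace such direct solves with a randomized generalized singular value decomposition for large 3-D problems.

      import numpy as np

      def tv_irls(a, b, lam=1.0, eps=1e-6, n_iter=30):
          # IRLS for: min_x 0.5*||A x - b||^2 + lam*||D x||_1,
          # with D the 1-D first-difference operator.
          n = a.shape[1]
          d = np.diff(np.eye(n), axis=0)                # (n-1) x n difference matrix
          x = np.linalg.lstsq(a, b, rcond=None)[0]
          for _ in range(n_iter):
              w = 1.0 / np.sqrt((d @ x) ** 2 + eps)     # TV reweighting, eps-smoothed
              x = np.linalg.solve(a.T @ a + lam * d.T @ (w[:, None] * d), a.T @ b)
          return x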

  11. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
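
    For orientation, the Tykhonov-Phillips (R-HAPS) estimator referred to above has the standard regularized normal-equations form (notation generic, not the paper's exact derivation):

      \hat{\xi}_{\alpha} = \left( A^{\top} P A + \alpha R \right)^{-1} A^{\top} P y,
      \qquad R = S^{-1},

    where P is the observation weight matrix and α the regularization parameter; the paper's point is that an MSE-optimal α can be estimated stochastically via repro-BIQUUE rather than chosen ad hoc.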

  12. L(2) stability for weak solutions of the Navier-Stokes equations in R(3)

    NASA Astrophysics Data System (ADS)

    Secchi, P.

    1985-11-01

    We consider the motion of a viscous fluid filling the whole space R³, governed by the classical Navier-Stokes equations (1). Existence of global (in time) regular solutions for that system of non-linear partial differential equations is still an open problem. Up to now, the only available global existence theorem (other than for sufficiently small initial data) is that of weak (turbulent) solutions. From both the mathematical and the physical point of view, an interesting property is the stability of such weak solutions. We assume that v(t,x) is a solution, with initial datum v₀(x). We suppose that the initial datum is perturbed and consider one weak solution u corresponding to the new initial velocity. Then we prove that, due to viscosity, the perturbed weak solution u approaches, in a suitable norm, the unperturbed one as time goes to +∞, without smallness assumptions on the initial perturbation.

  13. Isothermal separation processes

    NASA Technical Reports Server (NTRS)

    England, C.

    1982-01-01

    The isothermal processes of membrane separation, supercritical extraction and chromatography were examined using availability analysis. The general approach was to derive equations that identified where energy is consumed in these processes and how they compare with conventional separation methods. These separation methods are characterized by pure work inputs, chiefly in the form of a pressure drop which supplies the required energy. Equations were derived for the energy requirement in terms of regular solution theory. This approach is believed to accurately predict the work of separation in terms of the heat of solution and the entropy of mixing. It can form the basis of a convenient calculation method for optimizing membrane and solvent properties for particular applications. Calculations were made on the energy requirements for a membrane process separating air into its components.
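
    In regular solution terms, the reversible isothermal work of separation per mole of a binary mixture is the negative Gibbs energy of mixing, combining the entropy-of-mixing and heat-of-solution contributions the abstract refers to (a textbook form, not the report's exact equations):

      W_{\min} = -\Delta G_{\mathrm{mix}}
               = -RT \left( x_A \ln x_A + x_B \ln x_B \right) - W x_A x_B,

    with W the regular-solution interaction parameter, so that ΔH_mix = W x_A x_B; any real membrane or solvent process must supply at least this work, the balance appearing as the pressure-drop input discussed above.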

  14. Conditional Anomaly Detection with Soft Harmonic Functions

    PubMed Central

    Valko, Michal; Kveton, Branislav; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos

    2012-01-01

    In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response or a class label. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method on several synthetic and UCI ML datasets in detecting unusual labels when compared to several baseline approaches. We also evaluate the performance of our method on a real-world electronic health record dataset where we seek to identify unusual patient-management decisions. PMID:25309142
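
    A minimal sketch of the soft harmonic backbone follows, assuming a precomputed symmetric affinity matrix; the confidence machinery and the boundary-regularization details of the paper are omitted, so the names and the γ parameter are illustrative.

      import numpy as np

      def soft_harmonic(w, y, gamma=1.0):
          # Soft harmonic solution: min_f f^T L f + gamma * ||f - y||^2,
          # giving f = gamma * (L + gamma*I)^{-1} y.  y holds +/-1 labels
          # (0 for unlabeled); a small or sign-flipped f_i relative to y_i
          # flags a potentially anomalous (mislabeled) instance.
          lap = np.diag(w.sum(axis=1)) - w              # unnormalized graph Laplacian
          return gamma * np.linalg.solve(lap + gamma * np.eye(len(y)), y)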

  15. Conditional Anomaly Detection with Soft Harmonic Functions.

    PubMed

    Valko, Michal; Kveton, Branislav; Valizadegan, Hamed; Cooper, Gregory F; Hauskrecht, Milos

    2011-01-01

    In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response or a class label. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method on several synthetic and UCI ML datasets in detecting unusual labels when compared to several baseline approaches. We also evaluate the performance of our method on a real-world electronic health record dataset where we seek to identify unusual patient-management decisions.

  16. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
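
    The second subproblem is the classic L2-TV model, for which the split-Bregman iteration is standard. A 1-D denoising sketch is below (dense matrices, direct solve) to show the mechanics; it is not the paper's 3-D tomography code, and the μ, λ values are illustrative.

      import numpy as np

      def tv_denoise_split_bregman(f, mu=10.0, lam=5.0, n_iter=50):
          # Split Bregman for: min_u 0.5*mu*||u - f||^2 + ||D u||_1.
          # Auxiliary d ~ D u and Bregman variable b enforce the splitting.
          n = f.size
          d_mat = np.diff(np.eye(n), axis=0)
          lhs = mu * np.eye(n) + lam * d_mat.T @ d_mat
          u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
          for _ in range(n_iter):
              u = np.linalg.solve(lhs, mu * f + lam * d_mat.T @ (d - b))
              du = d_mat @ u
              d = np.sign(du + b) * np.maximum(np.abs(du + b) - 1.0 / lam, 0.0)
              b = b + du - d
          return u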

  17. Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model

    PubMed Central

    Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz

    2014-01-01

    Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels by the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from estimates of intermediate variables and do not yield an optimum estimate of oxygen tension, owing to its nonlinear dependence on the ratio of the intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits available knowledge in studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach where knowledge about intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of earlier methods, and its superior performance both quantitatively and qualitatively is demonstrated. PMID:23732915

  18. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
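
    The L-curve logic itself is simple; what makes GRACE hard is the problem size. The sketch below does the brute-force version (a direct Tikhonov solve per candidate parameter) that Lanczos bidiagonalization is designed to avoid; it is illustrative only.

      import numpy as np

      def l_curve_corner(a, b, lams):
          # Sweep Tikhonov parameters and pick the corner of the L-curve,
          # i.e. the point of maximum curvature of (log residual, log norm).
          rho, eta = [], []
          for lam in lams:
              x = np.linalg.solve(a.T @ a + lam ** 2 * np.eye(a.shape[1]), a.T @ b)
              rho.append(np.log(np.linalg.norm(a @ x - b)))
              eta.append(np.log(np.linalg.norm(x)))
          dr, de = np.gradient(np.array(rho)), np.gradient(np.array(eta))
          d2r, d2e = np.gradient(dr), np.gradient(de)
          kappa = (dr * d2e - d2r * de) / (dr ** 2 + de ** 2) ** 1.5
          return lams[int(np.argmax(kappa))]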

  19. Regular black holes in f(T) Gravity through a nonlinear electrodynamics source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junior, Ednaldo L.B.; Rodrigues, Manuel E.; Houndjo, Mahouton J.S., E-mail: ednaldobarrosjr@gmail.com, E-mail: esialg@gmail.com, E-mail: sthoundjo@yahoo.fr

    2015-10-01

    We seek to obtain a new class of exact solutions of regular black holes in f(T) gravity with non-linear electrodynamics material content, with spherical symmetry in 4D. The equations of motion allow the recovery of various solutions of General Relativity as the particular case where the function f(T) = T. We develop a powerful method for finding exact solutions and obtain the first new class of regular black hole solutions in f(T) theory, in which all the geometric scalars vanish at the origin of the radial coordinate and are finite everywhere, as well as a new class of singular black holes.

  20. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

    We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n−1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s−ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s−ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  1. Application of regular associated solution model to the liquidus curves of the Sn-Te and Sn-SnS systems

    NASA Astrophysics Data System (ADS)

    Eric, H.

    1982-12-01

    The liquidus curves of the Sn-Te and Sn-SnS systems were evaluated by the regular associated solution model (RAS). The main assumption of this theory is the existence of species A, B and associated complexes AB in the liquid phase. Thermodynamic properties of the binary A-B system are derived by ternary regular solution equations. Calculations based on this model for the Sn-Te and Sn-SnS systems are in agreement with published data.
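
    Schematically, the RAS model couples an association equilibrium in the melt with regular-solution interactions among the three species; in generic notation (not the paper's exact equations):

      \mathrm{A} + \mathrm{B} \rightleftharpoons \mathrm{AB}, \qquad
      K = \frac{x_{AB}\,\gamma_{AB}}{x_A \gamma_A \, x_B \gamma_B},
      \qquad
      G^{E} = W_{A,B}\, x_A x_B + W_{A,AB}\, x_A x_{AB} + W_{B,AB}\, x_B x_{AB},

    where the x_i are the true mole fractions of A, B and AB in the associated liquid and the W_{ij} are regular-solution interaction parameters; the liquidus then follows from equating the chemical potential of each solid phase with that of the associated liquid.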

  2. SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space

    PubMed Central

    Lustig, Michael; Pauly, John M.

    2010-01-01

    A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
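
    The POCS flavour of such reconstructions alternates projections onto constraint sets. The skeleton below conveys only that structure: a crude image-domain projection stands in for SPIRiT's calibration-consistency (kernel) operator, which is not reproduced here, so this is a generic sketch rather than the published algorithm.

      import numpy as np

      def pocs_recon(y, mask, n_iter=100):
          # Alternate (1) an image-domain constraint and (2) data consistency
          # on the acquired k-space samples.  mask is boolean, y is k-space.
          k = y.copy()
          for _ in range(n_iter):
              img = np.fft.ifft2(k)
              img = np.maximum(img.real, 0.0)           # placeholder constraint
              k = np.fft.fft2(img)
              k[mask] = y[mask]                         # keep measured samples
          return np.fft.ifft2(k)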

  3. A gradient enhanced plasticity-damage microplane model for concrete

    NASA Astrophysics Data System (ADS)

    Zreid, Imadeddin; Kaliske, Michael

    2018-03-01

    Computational modeling of concrete poses two main types of challenges. The first is the mathematical description of local response for such a heterogeneous material under all stress states, and the second is the stability and efficiency of the numerical implementation in finite element codes. The paper at hand presents a comprehensive approach addressing both issues. Adopting the microplane theory, a combined plasticity-damage model is formulated and regularized by an implicit gradient enhancement. The plasticity part introduces a new microplane smooth 3-surface cap yield function, which provides a stable numerical solution within an implicit finite element algorithm. The damage part utilizes a split, which can describe the transition of loading between tension and compression. Regularization of the model by the implicit gradient approach eliminates the mesh sensitivity and numerical instabilities. Identification methods for model parameters are proposed and several numerical examples of plain and reinforced concrete are carried out for illustration.

  4. Ab initio calculation of excess properties of La₁₋ₓ(Ln,An)ₓPO₄ solid solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yan; JARA High-Performance Computing, Schinkelstrasse 2, 52062 Aachen; Kowalski, Piotr M., E-mail: p.kowalski@fz-juelich.de

    2014-12-15

    We used an ab initio computational approach to predict the excess enthalpy of mixing and the corresponding regular/subregular model parameters for La₁₋ₓLnₓPO₄ (Ln = Ce, ..., Tb) and La₁₋ₓAnₓPO₄ (An = Pu, Am and Cm) monazite-type solid solutions. We found that the regular model interaction parameter W computed for La₁₋ₓLnₓPO₄ solid solutions matches the few existing experimental data. Within the lanthanide series, W increases quadratically with the volume mismatch between the LaPO₄ and LnPO₄ endmembers (ΔV = V(LaPO₄) − V(LnPO₄)), so that W(kJ/mol) = 0.618 (ΔV(cm³/mol))². We demonstrate that this relationship also fits the interaction parameters computed for La₁₋ₓAnₓPO₄ solid solutions. This shows that lanthanides can be used as surrogates for the investigation of the thermodynamic mixing properties of actinide-bearing solid solutions. - Highlights: • The excess enthalpies of mixing for monazite-type solid solutions are computed. • The excess enthalpies increase with the endmember volume mismatch. • The relationship derived for lanthanides is transferable to La₁₋ₓAnₓPO₄ systems.
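
    The quoted fit is directly usable; for a hypothetical endmember volume mismatch of 3 cm³/mol it predicts roughly 5.6 kJ/mol:

      def interaction_parameter(delta_v):
          # Regular-model interaction parameter from the quadratic fit above:
          # W (kJ/mol) = 0.618 * (delta_v in cm^3/mol)**2.
          return 0.618 * delta_v ** 2

      print(interaction_parameter(3.0))                 # -> 5.562 kJ/mol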

  5. Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.

    2011-01-01

    This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one, based on the assumption that the locally weighted combination varies with respect to both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on training-set labels. A closed-form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in the literature, and we empirically show that it significantly improves on the performance of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to existing weak-segmenter combination strategies on a hippocampal data set. PMID:22003748

  6. Refraction tomography mapping of near-surface dipping layers using landstreamer data at East Canyon Dam, Utah

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.

    2008-01-01

    We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface models, best matching the dipping-layer structure of nearby outcrops. A reasonably well-matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, the use of conventional regularization parameters did not provide as realistic results. Thus, we consider that even when there is only qualitative (i.e., visual) a priori information about a site - as in the case of East Canyon Dam, Utah - it might be possible to minimize the refraction nonuniqueness by estimating the most appropriate regularization parameters.

  7. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.

  8. A Piecewise Deterministic Markov Toy Model for Traffic/Maintenance and Associated Hamilton–Jacobi Integrodifferential Systems on Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr

    2016-10-15

    We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending Krylov’s “shaking the coefficients” method to networks, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron’s preconization for the (unique) candidate for the viscosity solution.

  9. Image Restoration Using Functional and Anatomical Information Fusion with Application to SPECT-MRI Images

    PubMed Central

    Benameur, S.; Mignotte, M.; Meunier, J.; Soucy, J. -P.

    2009-01-01

    Image restoration is usually viewed as an ill-posed problem in image processing, since there is no unique solution associated with it. The quality of the restored image depends closely on the constraints imposed on the characteristics of the solution. In this paper, we propose an original extension of the NAS-RIF restoration technique that uses information fusion as prior information, with application to SPECT medical imaging. That extension allows the restoration process to be constrained by efficiently incorporating, within the NAS-RIF method, a regularization term which stabilizes the inverse solution. Our restoration method is constrained by anatomical information extracted from a high-resolution anatomical procedure such as magnetic resonance imaging (MRI). This structural anatomy-based regularization term uses the result of an unsupervised Markovian segmentation obtained after a preliminary registration step between the MRI and SPECT data volumes of each patient. This method was successfully tested on 30 pairs of brain MRI and SPECT acquisitions from different subjects and on Hoffman and Jaszczak SPECT phantoms. The experiments demonstrated that the method performs better, in terms of signal-to-noise ratio, than a classical supervised restoration approach using a Metz filter. PMID:19812704

  10. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
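
    The ℓp/Schatten link mentioned above is concrete for p = 1: the proximal map of the nuclear (Schatten-1) norm applies ℓ1 soft-thresholding to the singular values. A minimal sketch (the ADMM machinery and the Poisson fidelity term are omitted):

      import numpy as np

      def prox_schatten_1(x, tau):
          # prox of tau*||X||_S1: soft-threshold the singular values.
          u, s, vt = np.linalg.svd(x, full_matrices=False)
          return (u * np.maximum(s - tau, 0.0)) @ vt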

  11. Extensive regularization of the coupled cluster methods based on the generating functional formalism: application to gas-phase benchmarks and to the S(N)2 reaction of CHCl3 and OH- in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalski, Karol; Valiev, Marat

    2009-12-21

    The recently introduced energy expansion based on the use of the generating functional (GF) [K. Kowalski, P.D. Fan, J. Chem. Phys. 130, 084112 (2009)] provides a way of constructing size-consistent non-iterative coupled-cluster (CC) corrections in terms of moments of the CC equations. To take advantage of this expansion in a strongly interacting regime, regularization of the cluster amplitudes is required in order to counteract the effect of excessive growth of the norm of the CC wavefunction. Although proven to be efficient, the previously discussed form of the regularization does not lead to rigorously size-consistent corrections. In this paper we address the issue of size-consistent regularization of the GF expansion by redefining the equations for the cluster amplitudes. The performance and basic features of the proposed methodology are illustrated on several gas-phase benchmark systems. Moreover, the regularized GF approaches are combined with a QM/MM module and applied to describe the SN2 reaction of CHCl3 and OH- in aqueous solution.

  12. A Dictionary Learning Approach with Overlap for the Low Dose Computed Tomography Reconstruction and Its Vectorial Application to Differential Phase Tomography

    PubMed Central

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies, including the development and implementation of advanced image reconstruction procedures, is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains, in addition to the usual sparsity-inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing norm of the patch basis-function coefficients, and a coefficient multiplying the norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating the projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is known a priori. We apply the method to experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components, one per gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-suited to strongly reducing the required dose and the number of projections in medical tomography. PMID:25531987
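
    The FISTA engine used above is standard accelerated proximal gradient descent. A generic ℓ1 least-squares sketch follows (the patch dictionary, the overlap term, and the projection/backprojection gradient are omitted, so this shows only the solver's skeleton):

      import numpy as np

      def fista_l1(a, b, lam=0.1, n_iter=200):
          # FISTA for: min_x 0.5*||A x - b||^2 + lam*||x||_1.
          step = 1.0 / np.linalg.norm(a, 2) ** 2
          x = np.zeros(a.shape[1]); z = x.copy(); t = 1.0
          for _ in range(n_iter):
              x_new = z - step * a.T @ (a @ z - b)      # gradient step at z
              x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - step * lam, 0.0)
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
              x, t = x_new, t_new
          return x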

  13. A dictionary learning approach with overlap for the low dose computed tomography reconstruction and its vectorial application to differential phase tomography.

    PubMed

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies, including the development and implementation of advanced image reconstruction procedures, is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains, in addition to the usual sparsity-inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing L1 norm of the patch basis-function coefficients, and a coefficient multiplying the L2 norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating the projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is known a priori. We apply the method to experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components, one per gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-suited to strongly reducing the required dose and the number of projections in medical tomography.

  14. An irregular lattice method for elastic wave propagation

    NASA Astrophysics Data System (ADS)

    O'Brien, Gareth S.; Bean, Christopher J.

    2011-12-01

    Lattice methods are a class of numerical schemes which represent a medium as a connection of interacting nodes or particles. In the case of modelling seismic wave propagation, the interaction term is determined from Hooke's law including a bond-bending term. This approach has been shown to model isotropic seismic wave propagation in an elastic or viscoelastic medium by selecting the appropriate underlying lattice structure. To predetermine the material constants, this methodology has been restricted to regular grids, hexagonal or square in 2-D and cubic in 3-D. Here, we present a method for isotropic elastic wave propagation where we can remove this lattice restriction. The methodology is outlined and a relationship between the elastic material properties and an irregular lattice geometry is derived. The numerical method is compared with an analytical solution for wave propagation in an infinite homogeneous body, and with a numerical solution for a layered elastic medium. The dispersion properties of this method are derived from a plane-wave analysis, showing that the scheme is more dispersive than a regular lattice method. Therefore, the computational costs of using an irregular lattice are higher. However, by removing the regular lattice structure, the anisotropic nature of fracture propagation in such methods can be removed.

  15. The determination of pair-distance distribution by double electron-electron resonance: regularization by the length of distance discretization with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Dzuba, Sergei A.

    2016-08-01

    The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow for the determination of their distance distribution function, P(r). P(r) is derived as a solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of fulfilling the non-negativity constraint on P(r). For the case of overlapping broad and narrow distributions, the regularization by increasing the distance discretization length may be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
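
    A toy version of the Monte Carlo fitting loop is sketched below, assuming the DEER kernel has already been discretized into a matrix (the kernel construction, and hence any physical realism, is left out); random nonnegative updates are kept only when they reduce the misfit, and coarsening the r-grid plays the regularizing role described above.

      import numpy as np

      def monte_carlo_pr(kernel, signal, n_trials=20000, seed=None):
          # Fit nonnegative p in signal ~ kernel @ p by accept-if-better trials.
          rng = np.random.default_rng(seed)
          n_r = kernel.shape[1]
          p = np.full(n_r, signal.mean() / n_r)
          best = np.linalg.norm(kernel @ p - signal)
          for _ in range(n_trials):
              trial = p.copy()
              i = rng.integers(n_r)
              trial[i] = max(trial[i] + rng.normal(scale=0.1 * (p.max() + 1e-12)), 0.0)
              err = np.linalg.norm(kernel @ trial - signal)
              if err < best:
                  p, best = trial, err
          return p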

  16. Existence and Regularity of Invariant Measures for the Three Dimensional Stochastic Primitive Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glatt-Holtz, Nathan, E-mail: negh@vt.edu; Kukavica, Igor, E-mail: kukavica@usc.edu; Ziane, Mohammed, E-mail: ziane@usc.edu

    2014-05-15

    We establish the continuity of the Markovian semigroup associated with strong solutions of the stochastic 3D Primitive Equations, and prove the existence of an invariant measure. The proof is based on new moment bounds for strong solutions. The invariant measure is supported on strong solutions and is furthermore shown to have higher regularity properties.

  17. Nonlinear stability of Gardner breathers

    NASA Astrophysics Data System (ADS)

    Alejo, Miguel A.

    2018-01-01

    We show that breather solutions of the Gardner equation, a natural generalization of the KdV and mKdV equations, are H²(R) stable. Through a variational approach, we characterize Gardner breathers as minimizers of a new Lyapunov functional and we study the associated spectral problem through (i) the analysis of the spectrum of explicit linear systems (spectral stability), and (ii) controlling degenerate directions by using low-regularity conservation laws.

  18. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives, which allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better-resolved and more accurate solutions than those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
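
    For contrast with the paper's L1/PDCO route, the common L2-regularized baseline it mentions can be written in a few lines: stack the Tikhonov term under the kernel and solve a single nonnegative least-squares problem (the kernel discretization and parameter values here are illustrative).

      import numpy as np
      from scipy.optimize import nnls

      def laplace_invert_l2(t, rates, signal, lam=0.1):
          # Solve min_p ||K p - s||^2 + lam^2*||p||^2  subject to  p >= 0,
          # by augmenting K with lam*I and s with zeros, then calling NNLS.
          k = np.exp(-np.outer(t, rates))               # discretized Laplace kernel
          k_aug = np.vstack([k, lam * np.eye(len(rates))])
          s_aug = np.concatenate([signal, np.zeros(len(rates))])
          p, _ = nnls(k_aug, s_aug)
          return p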

  19. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives, which allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better-resolved and more accurate solutions than those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.

  20. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of the area-source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed via the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, in which the objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
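
    As a minimal stand-in for the perceptron inversion, a single linear layer trained by the delta (LMS) rule already shows the mechanics; the hidden layers, data, and learning rate of the study are not reproduced, so all names here are illustrative.

      import numpy as np

      def delta_rule_train(x, y, lr=0.01, epochs=500, seed=0):
          # Delta-rule training of a linear map from observed concentrations x
          # (n_samples x n_sensors) to source strengths y (n_samples x n_sources).
          rng = np.random.default_rng(seed)
          w = rng.normal(scale=0.01, size=(x.shape[1], y.shape[1]))
          for _ in range(epochs):
              err = x @ w - y                           # prediction error
              w -= lr * x.T @ err / len(x)              # mean-squared-error gradient
          return w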

  1. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x,t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^{1,∞} energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an ε-uniform estimate on moments of the error u_ε − u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  2. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

    Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameter f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances, which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature, which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, with estimates uniform in alpha.

  3. Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations

    NASA Astrophysics Data System (ADS)

    Phan, Tuoc

    2017-12-01

    This paper studies the Sobolev regularity of weak solutions of a class of singular quasi-linear parabolic problems of the form u_t − div[A(x,t,u,∇u)] = div[F] with homogeneous Dirichlet boundary conditions over bounded spatial domains. Our main focus is on the case that the vector coefficients A are discontinuous and singular in the (x,t)-variables, and dependent on the solution u. Global and interior weighted W^{1,p}(Ω_T, ω)-regularity estimates are established for weak solutions of these equations, where ω is a weight function in some Muckenhoupt class of weights. The results obtained are new even for linear equations, and for ω = 1, because of the singularity of the coefficients in the (x,t)-variables.

  4. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. This is especially true for the forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is carried to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which results in accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  5. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
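
    The core construction—a wavelet operator obtained by applying a band-pass kernel to the eigen-decomposition of a graph Laplacian—can be sketched compactly. The ring graph and the kernel g below are illustrative stand-ins for a finite-element mesh graph and the paper's choice of wavelet generating kernel.

```python
# Sketch of a spectral graph wavelet: apply a band-pass kernel g to the
# eigen-decomposition of a graph Laplacian. A ring graph stands in for a
# finite-element mesh graph; g and the scale s are illustrative choices.
import numpy as np

n = 20
W = np.zeros((n, n))
for i in range(n):                         # ring graph adjacency
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
L = np.diag(W.sum(axis=1)) - W             # combinatorial graph Laplacian

lam, U = np.linalg.eigh(L)                 # graph Fourier basis

def wavelet_operator(s, kernel=lambda x: x * np.exp(1.0 - x)):
    # Psi_s = U g(s*Lambda) U^T acts as a localized band-pass filter
    return U @ np.diag(kernel(s * lam)) @ U.T

atom = wavelet_operator(2.0)[:, 0]         # wavelet centered at node 0
# Penalizing wavelet coefficients of the conductivity image (instead of raw
# elementwise values) promotes sparsity while keeping local smoothness.
```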

  6. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to positron emission tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
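
    The interior-point ingredient described above—replacing the constrained problem by a sequence of subproblems with a logarithmic barrier, with the barrier weight mu driven to zero (equivalently, the parameter 1/mu increased)—can be illustrated on a toy quadratic. This sketch uses a damped Newton inner solver in place of the paper's PCG, and the objective is a stand-in for the Poisson likelihood plus TV prior.

```python
# Sketch of the log-barrier idea: enforce x > 0 by solving a sequence of
# subproblems min f(x) - mu * sum(log x) with mu -> 0, each solved here by
# a damped Newton method. The toy quadratic f stands in for the Poisson
# likelihood + TV prior; the inner solve stands in for the paper's PCG.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])     # SPD Hessian of the toy objective
b = np.array([1.0, -2.0])                  # drives the minimizer against x >= 0

def newton_barrier(x, mu, iters=50):
    for _ in range(iters):
        g = A @ x - b - mu / x             # gradient of barrier subproblem
        H = A + np.diag(mu / x ** 2)       # its Hessian
        dx = np.linalg.solve(H, -g)
        t = 1.0
        while np.any(x + t * dx <= 0):     # damping keeps x strictly feasible
            t *= 0.5
        x = x + t * dx
    return x

x = np.array([1.0, 1.0])                   # strictly interior starting point
for mu in [1.0, 0.1, 0.01, 1e-4]:
    x = newton_barrier(x, mu)              # central path towards the solution
```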

  7. An analytical method for the inverse Cauchy problem of the Lamé equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lamé equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we consider the analytical solution only in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function. Then we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
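
    The Lavrentiev step—replacing the ill-posed first-kind equation K f = g by the well-posed second-kind equation (αI + K) f_α = g—is easy to demonstrate numerically. The smooth symmetric kernel below is an illustrative stand-in for the kernels arising in the Lamé problem, and α is chosen by hand rather than by an error estimate.

```python
# Sketch of Lavrentiev regularization: the ill-posed first-kind equation
# K f = g is replaced by the second-kind equation (alpha*I + K) f = g.
# The smooth symmetric kernel below is an illustrative stand-in, and
# alpha is chosen by hand.
import numpy as np

n = 200
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]
K = h * np.exp(-5.0 * np.abs(s[:, None] - s[None, :]))   # discretized kernel

f_true = np.sin(2 * np.pi * s)
g = K @ f_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

alpha = 1e-3
f_alpha = np.linalg.solve(alpha * np.eye(n) + K, g)
# Smaller alpha reduces bias but amplifies the noise in g; the closed-form
# series solution in the paper makes the same trade-off term by term.
```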

  8. Robust design optimization using the price of robustness, robust least squares and regularization methods

    NASA Astrophysics Data System (ADS)

    Bukhari, Hassan J.

    2017-12-01

    In this paper, a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, in a manner similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. The methodology is compared with a prior method based on multiple Monte Carlo simulation runs, showing that the approach presented in this paper results in better performance.

  9. On a full Bayesian inference for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to account mathematically for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model, and the regularization parameter was not assessed. To address this limitation, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
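
    The shift from a MAP point estimate to posterior uncertainty can be illustrated with a minimal random-walk Metropolis sampler on a one-parameter linear model; the model, prior, and proposal width are toy assumptions, not the paper's force-reconstruction setup.

```python
# Sketch of the step from a MAP estimate to posterior uncertainty: a
# random-walk Metropolis sampler for a one-parameter linear model, with a
# credible interval read off from the samples. Model, prior, and proposal
# width are toy assumptions, not the paper's force-reconstruction setup.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.5 * x + 0.1 * rng.standard_normal(50)        # data, true slope 2.5

def log_post(theta, sigma=0.1, prior_var=10.0):
    r = y - theta * x
    return -0.5 * (r @ r) / sigma ** 2 - 0.5 * theta ** 2 / prior_var

samples, theta, lp = [], 0.0, log_post(0.0)
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal()    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])                    # discard burn-in
lo, med, hi = np.percentile(post, [2.5, 50, 97.5]) # 95% credible interval
```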

  10. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE PAGES

    Chiang, Nai-Yuan; Huang, Rui; Zavala, Victor M.; ...

    2017-04-17

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  11. Geometric effects on electrocapillarity in nanochannels with an overlapped electric double layer.

    PubMed

    Lee, Jung A; Kang, In Seok

    2016-10-01

    Unsteady filling of electrolyte solution inside a nanochannel by the electrocapillarity effect is studied. The filling rate is predicted as a function of the bulk concentration of the electrolyte, the surface potential (or surface charge density), and the cross sectional shape of the channel. For a nanochannel, the average outward normal stress exerted on the cross section of a channel (P̄_zz) can be regarded as a measure of electrocapillarity and it is the driving force of the flow. This electrocapillarity measure is first analyzed by using the solution of the Poisson-Boltzmann equation. From the analysis, it is found that the results for many different cross sectional shapes can be unified with good accuracy if the hydraulic radius is adopted as the characteristic length scale of the problem. Especially in the case of constant surface potential, for both limits of κh→0 and κh→∞, it can be shown theoretically that the electrocapillarity is independent of the cross sectional shape if the hydraulic radius is the same. In order to analyze the geometric effects more systematically, we consider the regular N-polygons with the same hydraulic radius and the rectangles of different aspect ratios. Washburn's approach is then adopted to predict the filling rate of electrolyte solution inside a nanochannel. It is found that the average filling velocity decreases as N increases in the case of regular N-polygons with the same hydraulic radius. This is because the regular N-polygons of the same hydraulic radius share the same inscribing circle.

  12. Alert management for home healthcare based on home automation analysis.

    PubMed

    Truong, T T; de Lamotte, F; Diguet, J-Ph; Said-Hocine, F

    2010-01-01

    Rising healthcare costs for elderly and disabled people can be contained by offering people autonomy at home by means of information technology. In this paper, we present an original, sensorless alert management solution which performs multimedia and home automation service discrimination and extracts highly regular home activities to serve as sensors for alert management. Results on simulated data, based on a real context, allow us to evaluate our approach before applying it to real data.

  13. Chemical modification of electrolytes for lithium batteries

    NASA Astrophysics Data System (ADS)

    Afanas'ev, Vladimir N.; Grechin, Aleksandr G.

    2002-09-01

    Modern approaches to the chemical modification of electrolytes for lithium batteries are analysed with the aim of optimising the charge-transfer processes in liquid-phase and solid (polymeric) media. The main regularities of the transport properties of lithium electrolyte solutions containing complex (encapsulated) ions in aprotic solvents and polymers are discussed. The prospects for the development of electrolytic solvosystems with a chain (ionotropic) mechanism of conduction with respect to lithium ions are outlined. The bibliography includes 126 references.

  14. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541

  15. Partial regularity of viscosity solutions for a class of Kolmogorov equations arising from mathematical finance

    NASA Astrophysics Data System (ADS)

    Rosestolato, M.; Święch, A.

    2017-02-01

    We study value functions which are viscosity solutions of certain Kolmogorov equations. Using PDE techniques we prove that they are C^{1+α} regular on special finite-dimensional subspaces. The problem has origins in hedging derivatives of risky assets in mathematical finance.

  16. Nonnegative least-squares image deblurring: improved gradient projection approaches

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods converging to nonnegative least-squares solutions have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Although they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears to be the most efficient one.
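
    In the spirit of the step-length selection rules discussed above, here is a compact projected-gradient iteration for non-negative least squares with a Barzilai-Borwein step; the matrix A and data y are random stand-ins for a blurring operator and a blurred image, and early stopping plays the regularizing role.

```python
# Sketch of a projected-gradient iteration for non-negative least squares
# with a Barzilai-Borwein step length, in the spirit of the acceleration
# rules discussed above. A and y are random stand-ins for the blurring
# operator and blurred image; early stopping acts as the regularizer.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 50))
x_true = np.maximum(rng.standard_normal(50), 0.0)
y = A @ x_true + 0.01 * rng.standard_normal(100)

x = np.zeros(50)
g = A.T @ (A @ x - y)
step = 1.0 / np.linalg.norm(A, 2) ** 2             # conservative first step
for _ in range(200):
    x_new = np.maximum(x - step * g, 0.0)          # gradient step + projection
    g_new = A.T @ (A @ x_new - y)
    s, z = x_new - x, g_new - g
    if s @ z > 0:
        step = (s @ s) / (s @ z)                   # Barzilai-Borwein step
    x, g = x_new, g_new
```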

  17. Applications of exact traveling wave solutions of Modified Liouville and the Symmetric Regularized Long Wave equations via two new techniques

    NASA Astrophysics Data System (ADS)

    Lu, Dianchen; Seadawy, Aly R.; Ali, Asghar

    2018-06-01

    In this work, we employ novel methods to find exact traveling wave solutions of the Modified Liouville equation and the Symmetric Regularized Long Wave equation, namely the extended simple equation and exp(-Ψ(ξ))-expansion methods. By assigning different values to the parameters, different types of solitary wave solutions are derived from the exact traveling wave solutions, which shows the efficiency and precision of our methods. Some solutions are represented graphically. The obtained results have several applications in physical science.

  18. The method of A-harmonic approximation and optimal interior partial regularity for nonlinear elliptic systems under the controllable growth condition

    NASA Astrophysics Data System (ADS)

    Chen, Shuhong; Tan, Zhong

    2007-11-01

    In this paper, we consider nonlinear elliptic systems under the controllable growth condition. We use a new method introduced by Duzaar and Grotowski for proving partial regularity of weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results under the natural growth condition to the case of the controllable growth condition, and directly establish the optimal Hölder exponent for the derivative of a weak solution.

  19. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
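
    For a convolution kernel, Tikhonov regularization with a stabilizing functional diagonalizes in the Fourier domain, which makes the role of the stabilizer explicit. The single-parameter penalty α|w|^(2p) below is an illustrative stand-in for the paper's two-parameter stabilizing functions.

```python
# Sketch of Tikhonov regularization for a convolution equation y = k * x in
# the Fourier domain, where the stabilizing functional appears as a spectral
# weight. The single-parameter weight alpha*|w|^(2p) is an illustrative
# stand-in for the paper's two-parameter stabilizing functions.
import numpy as np

n = 256
rng = np.random.default_rng(0)
t = np.arange(n)
d = np.minimum(t, n - t)                         # circular distance from 0
k = np.exp(-0.5 * (d / 4.0) ** 2); k /= k.sum()  # Gaussian blur kernel
x_true = ((t > 100) & (t < 160)).astype(float)   # boxcar test signal

K = np.fft.fft(k)
y = np.real(np.fft.ifft(K * np.fft.fft(x_true))) # circular convolution
y += 0.01 * rng.standard_normal(n)

w = np.fft.fftfreq(n)
alpha, p = 1e-3, 1.0
stabilizer = np.abs(w) ** (2 * p)                # spectral penalty weight
X = np.conj(K) * np.fft.fft(y) / (np.abs(K) ** 2 + alpha * stabilizer)
x_hat = np.real(np.fft.ifft(X))                  # regularized deconvolution
```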

  1. Contribution of the GOCE gradiometer components to regional gravity solutions

    NASA Astrophysics Data System (ADS)

    Naeimi, Majid; Bouman, Johannes

    2017-05-01

    The contribution of the GOCE gravity gradients to regional gravity field solutions is investigated in this study. We employ radial basis functions to recover the gravity field on regional scales over the Amazon and the Himalayas as our test regions. In the first step, four individual solutions based on the more accurate gravity gradient components Txx, Tyy, Tzz and Txz are derived. The Tzz component gives a better solution than the other single-component solutions, despite Tzz being less accurate than Txx and Tyy. Furthermore, we determine five more solutions based on several selected combinations of the gravity gradient components, including a combined solution using all four gradient components. The Tzz and Tyy components are shown to be the main contributors in all combined solutions, whereas Txz adds the least value to the regional gravity solutions. We also investigate the contribution of the regularization term. We show that the contribution of the regularization decreases significantly as more gravity gradients are included. For the solution using all gravity gradients, the regularization term contributes about 5 per cent of the total solution. Finally, we demonstrate that in our test areas, regional gravity modelling based on GOCE data provides more reliable gravity signal in medium wavelengths compared to pre-GOCE global gravity field models such as EGM2008.

  2. Two-Way Regularized Fuzzy Clustering of Multiple Correspondence Analysis.

    PubMed

    Kim, Sunmee; Choi, Ji Yeh; Hwang, Heungsun

    2017-01-01

    Multiple correspondence analysis (MCA) is a useful tool for investigating the interrelationships among dummy-coded categorical variables. MCA has been combined with clustering methods to examine whether there exist heterogeneous subclusters of a population, which exhibit cluster-level heterogeneity. These combined approaches aim to classify either observations only (one-way clustering of MCA) or both observations and variable categories (two-way clustering of MCA). The latter approach is favored because its solutions are easier to interpret by providing explicitly which subgroup of observations is associated with which subset of variable categories. Nonetheless, the two-way approach has been built on hard classification that assumes observations and/or variable categories to belong to only one cluster. To relax this assumption, we propose two-way fuzzy clustering of MCA. Specifically, we combine MCA with fuzzy k-means simultaneously to classify a subgroup of observations and a subset of variable categories into a common cluster, while allowing both observations and variable categories to belong partially to multiple clusters. Importantly, we adopt regularized fuzzy k-means, thereby enabling us to decide the degree of fuzziness in cluster memberships automatically. We evaluate the performance of the proposed approach through the analysis of simulated and real data, in comparison with existing two-way clustering approaches.
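
    The soft-membership machinery can be sketched with an entropy-regularized fuzzy k-means loop, one common regularized variant (not necessarily the exact formulation of the paper); lam controls the degree of fuzziness, which the proposed method selects automatically.

```python
# Sketch of soft (fuzzy) membership machinery via one common regularized
# variant: entropy-regularized fuzzy k-means, where lam sets the degree of
# fuzziness. This illustrates the ingredient, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
k, lam = 2, 1.0

centers = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(100):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    U = np.exp(-d2 / lam)                        # soft memberships
    U /= U.sum(axis=1, keepdims=True)
    centers = (U.T @ X) / U.sum(axis=0)[:, None] # weighted centroid update
# Each row of U gives an observation's partial membership in every cluster.
```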

  3. Cascades and Dissipative Anomalies in Relativistic Fluid Turbulence

    NASA Astrophysics Data System (ADS)

    Eyink, Gregory L.; Drivas, Theodore D.

    2018-02-01

    We develop a first-principles theory of relativistic fluid turbulence at high Reynolds and Péclet numbers. We follow an exact approach pioneered by Onsager, which we explain as a nonperturbative application of the principle of renormalization-group invariance. We obtain results very similar to those for nonrelativistic turbulence, with hydrodynamic fields in the inertial range described as distributional or "coarse-grained" solutions of the relativistic Euler equations. These solutions do not, however, satisfy the naive conservation laws of smooth Euler solutions but are afflicted with dissipative anomalies in the balance equations of internal energy and entropy. The anomalies are shown to be possible by exactly two mechanisms, local cascade and pressure-work defect. We derive "4 /5 th-law" type expressions for the anomalies, which allow us to characterize the singularities (structure-function scaling exponents) required for their not vanishing. We also investigate the Lorentz covariance of the inertial-range fluxes, which we find to be broken by our coarse-graining regularization but which is restored in the limit where the regularization is removed, similar to relativistic lattice quantum field theory. In the formal limit as speed of light goes to infinity, we recover the results of previous nonrelativistic theory. In particular, anomalous heat input to relativistic internal energy coincides in that limit with anomalous dissipation of nonrelativistic kinetic energy.

  4. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of Newton's iteration

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng

    2018-03-01

    A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.
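
    One simple way to smooth the non-smooth tensile/compressive split, in the spirit of the regularization methods discussed above, is to replace |x| by sqrt(x² + ε²) in the bracket operators, with ε playing the role of the regularization parameter. This particular surrogate is an illustrative choice, not necessarily one of the paper's three methods.

```python
# Sketch of one way to smooth the tensile/compressive split: the brackets
# <x>_± = (x ± |x|)/2 are non-smooth at x = 0, which hurts Newton's method;
# replacing |x| with sqrt(x^2 + eps^2) yields a smooth surrogate, with eps
# as the regularization parameter. Illustrative, not the paper's exact method.
import numpy as np

def split_smooth(x, eps):
    ax = np.sqrt(x ** 2 + eps ** 2)       # smooth surrogate for |x|
    return 0.5 * (x + ax), 0.5 * (x - ax) # tensile part, compressive part

x = np.linspace(-1.0, 1.0, 5)
for eps in (0.1, 0.01, 0.0):
    print(eps, split_smooth(x, eps)[0])   # tends to max(x, 0) as eps -> 0
```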

  5. On dynamical systems approaches and methods in f(R) cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alho, Artur; Carloni, Sante; Uggla, Claes, E-mail: aalho@math.ist.utl.pt, E-mail: sante.carloni@tecnico.ulisboa.pt, E-mail: claes.uggla@kau.se

    We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f(R)-gravity. We argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f(R) = R + αR², α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f(R) cosmology and discuss their advantages and disadvantages.

  6. Evasion of No-Hair Theorems and Novel Black-Hole Solutions in Gauss-Bonnet Theories

    NASA Astrophysics Data System (ADS)

    Antoniou, G.; Bakopoulos, A.; Kanti, P.

    2018-03-01

    We consider a general Einstein-scalar-Gauss-Bonnet theory with a coupling function f(ϕ). We demonstrate that black-hole solutions appear as a generic feature of this theory since a regular horizon and an asymptotically flat solution may be easily constructed under mild assumptions for f(ϕ). We show that the existing no-hair theorems are easily evaded, and a large number of regular black-hole solutions with scalar hair are then presented for a plethora of coupling functions f(ϕ).

  7. Fully pseudospectral solution of the conformally invariant wave equation near the cylinder at spacelike infinity. III: nonspherical Schwarzschild waves and singularities at null infinity

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Hennig, Jörg

    2018-03-01

    We extend earlier numerical and analytical considerations of the conformally invariant wave equation on a Schwarzschild background from the case of spherically symmetric solutions, discussed in Frauendiener and Hennig (2017 Class. Quantum Grav. 34 045005), to the case of general, nonsymmetric solutions. A key element of our approach is the modern standard representation of spacelike infinity as a cylinder. With a decomposition into spherical harmonics, we reduce the four-dimensional wave equation to a family of two-dimensional equations. These equations can be used to study the behaviour at the cylinder, where the solutions turn out to have, in general, logarithmic singularities at infinitely many orders. We derive regularity conditions that may be imposed on the initial data, in order to avoid the first singular terms. We then demonstrate that the fully pseudospectral time evolution scheme can be applied to this problem leading to a highly accurate numerical reconstruction of the nonsymmetric solutions. We are particularly interested in the behaviour of the solutions at future null infinity, and we numerically show that the singularities spread to null infinity from the critical set, where the cylinder approaches null infinity. The observed numerical behaviour is consistent with similar logarithmic singularities found analytically on the critical set. Finally, we demonstrate that even solutions with singularities at low orders can be obtained with high accuracy by virtue of a coordinate transformation that converts solutions with logarithmic singularities into smooth solutions.

  8. Progress towards daily "swath" solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.; Sakumura, C.

    2015-12-01

    The GRACE mission has provided invaluable data, the only data of its kind, measuring the total water column in the Earth system over the past 13 years. The GRACE solutions available from the project have been monthly average solutions. There have been attempts by several groups to produce shorter time-window solutions with different techniques. There is also an experimental quick-look GRACE solution available from CSR that implements a sliding window approach while applying variable daily data weights. All of these GRACE solutions require special handling for data assimilation. This study explores the possibility of generating a true daily GRACE solution by computing a daily "swath" total water storage (TWS) estimate from GRACE using the Tikhonov regularization and high-resolution monthly mascon estimation implemented at CSR. This paper discusses the techniques for computing such a solution and the characterization of its error and uncertainty. We perform comparisons with official RL05 GRACE solutions and with alternate mascon solutions from CSR to understand the impact on science results. We evaluate these solutions with emphasis on the temporal characteristics of the signal content and validate them against multiple models and in-situ data sets.

  9. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, F. D.; Chen, Y.; Singha, K.

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
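
    For concreteness, the spatial moments in question are simple weighted sums over the image grid; the synthetic plume below stands in for a tomogram.

```python
# Sketch of the inferred quantities: zeroth, first, and second spatial
# moments of a plume computed directly from a 2-D tomogram of
# concentrations. Grid and plume are synthetic stand-ins.
import numpy as np

nx, nz = 60, 40
x, z = np.meshgrid(np.arange(nx) * 0.5, np.arange(nz) * 0.5)  # cell centers (m)
c = np.exp(-((x - 12) ** 2 / 8.0 + (z - 7) ** 2 / 3.0))       # tomogram values

m0 = c.sum()                                   # zeroth moment (total mass, up
                                               # to the constant cell volume)
xc, zc = (c * x).sum() / m0, (c * z).sum() / m0   # first moments: centroid
sxx = (c * (x - xc) ** 2).sum() / m0           # second central moment in x
# Bias in sxx relative to the true plume spread is exactly what the moment
# resolution matrix is designed to quantify.
```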

  10. The unsaturated flow in porous media with dynamic capillary pressure

    NASA Astrophysics Data System (ADS)

    Milišić, Josipa-Pina

    2018-05-01

    In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with a dynamic capillary pressure-saturation relationship in which the relaxation parameter depends on the saturation. Following the approach given in [13], the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit when the regularization parameter goes to zero are obtained by using appropriate test functions, motivated by the fact that the considered PDE allows a natural generalization of the classical Kullback entropy. Finally, special care was taken in obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the obtained a priori estimates on the saturation.

  11. Potential estimates for the p-Laplace system with data in divergence form

    NASA Astrophysics Data System (ADS)

    Cianchi, A.; Schwarzacher, S.

    2018-07-01

    A pointwise bound for local weak solutions to the p-Laplace system is established in terms of data on the right-hand side in divergence form. The relevant bound involves a Havin-Maz'ya-Wolff potential of the datum, and is a counterpart for data in divergence form of a classical result of [25], recently extended to systems in [28]. A local bound for oscillations is also provided. These results allow for a unified approach to regularity estimates for broad classes of norms, including Banach function norms (e.g. Lebesgue, Lorentz and Orlicz norms), and norms depending on the oscillation of functions (e.g. Hölder, BMO and, more generally, Campanato type norms). In particular, new regularity properties are exhibited, and well-known results are easily recovered.

  12. Regularized GRACE monthly solutions by constraining the difference between the longitudinal and latitudinal gravity variations

    NASA Astrophysics Data System (ADS)

    Chen, Qiujie; Chen, Wu; Shen, Yunzhong; Zhang, Xingfu; Hsu, Houze

    2016-04-01

    The existing unconstrained Gravity Recovery and Climate Experiment (GRACE) monthly solutions, i.e., CSR RL05 from the Center for Space Research (CSR), GFZ RL05a from GeoForschungsZentrum (GFZ), JPL RL05 from the Jet Propulsion Laboratory (JPL), DMT-1 from the Delft Institute of Earth Observation and Space Systems (DEOS), AIUB from Bern University, and Tongji-GRACE01 as well as Tongji-GRACE02 from Tongji University, are dominated by correlated noise (such as north-south stripe errors) in the high-degree coefficients. To suppress the correlated noise of the unconstrained GRACE solutions, one typical option is to use post-processing filters such as decorrelation filtering and Gaussian smoothing, which are quite effective at reducing the noise and convenient to implement. Unlike these post-processing methods, the CNES/GRGS monthly GRACE solutions from the Centre National d'Etudes Spatiales (CNES) were developed by using regularization with the Kaula rule, whose correlated noise is reduced to such a great extent that no decorrelation filtering is required. Previous studies demonstrated that the north-south stripes in the GRACE solutions are due to the poor sensitivity of gravity variation in the east-west direction. In other words, the longitudinal sampling of the GRACE mission is very sparse while the latitudinal sampling is quite dense, so the recoverability of the longitudinal gravity variation is poor or unstable, leading to ill-conditioned monthly GRACE solutions. To stabilize the monthly solutions, we constructed regularization matrices by minimizing the difference between the longitudinal and latitudinal gravity variations and applied them to derive a time series of regularized GRACE monthly solutions, named RegTongji RL01, for the period Jan. 2003 to Aug. 2011. The signal powers and noise level of RegTongji RL01 were analyzed in this paper, which shows that: (1) no smoothing or decorrelation filtering is required for RegTongji RL01 anymore; (2) the signal powers of RegTongji RL01 are obviously stronger than those of the filtered solutions while the noise levels of the regularized and filtered solutions are consistent, suggesting that RegTongji RL01 has a higher signal-to-noise ratio.

  13. Boundary regularized integral equation formulation of the Helmholtz equation in acoustics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Khoo, Boo-Cheong; Chan, Derek Y C

    2015-01-01

    A boundary integral formulation for the solution of the Helmholtz equation is developed in which all traditional singular behaviour in the boundary integrals is removed analytically. The numerical precision of this approach is illustrated with calculation of the pressure field owing to radiating bodies in acoustic wave problems. This method facilitates the use of higher order surface elements to represent boundaries, resulting in a significant reduction in the problem size with improved precision. Problems with extreme geometric aspect ratios can also be handled without diminished precision. When combined with the CHIEF method, uniqueness of the solution of the exterior acoustic problem is assured without the need to solve hypersingular integrals.

  14. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  15. Predictive sparse modeling of fMRI data for improved classification, regression, and visualization using the k-support norm.

    PubMed

    Belilovsky, Eugene; Gkirtzou, Katerina; Misyrlis, Michail; Konova, Anna B; Honorio, Jean; Alia-Klein, Nelly; Goldstein, Rita Z; Samaras, Dimitris; Blaschko, Matthew B

    2015-12-01

    We explore various sparse regularization techniques for analyzing fMRI data, such as the ℓ1 norm (often called LASSO in the context of a squared loss function), elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. In this work we consider sparse regularization in both the regression and classification settings. We perform experiments on fMRI scans from cocaine-addicted as well as healthy control subjects. We show that in many cases, use of the k-support norm leads to better predictive performance, solution stability, and interpretability as compared to other standard approaches. We additionally analyze the advantages of using the absolute loss function versus the standard squared loss which leads to significantly better predictive performance for the regularization methods tested in almost all cases. Our results support the use of the k-support norm for fMRI analysis and on the clinical side, the generalizability of the I-RISA model of cocaine addiction. Copyright © 2015 Elsevier Ltd. All rights reserved.
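
    As a baseline for the comparisons described above, the standard sparse models are readily available in scikit-learn; the synthetic "scans × voxels" data below are illustrative, and the k-support norm itself is not part of scikit-learn and is not reproduced here.

```python
# Sketch of the baseline sparse models on synthetic "scans x voxels" data;
# scikit-learn's Lasso and ElasticNet are used. The k-support norm itself
# is not part of scikit-learn and is not reproduced here.
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n_scans, n_voxels = 60, 500
X = rng.standard_normal((n_scans, n_voxels))
w_true = np.zeros(n_voxels); w_true[:10] = 1.0      # 10 informative voxels
y = X @ w_true + 0.1 * rng.standard_normal(n_scans)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print((lasso.coef_ != 0).sum(), (enet.coef_ != 0).sum())  # selected voxels
```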

  16. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    PubMed

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from existing ones in their dictionary update stage, whose steps are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem, where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.

  17. Characterization of Window Functions for Regularization of Electrical Capacitance Tomography Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Peng, Lihui; Xiao, Deyun

    2007-06-01

    This paper presents a regularization method for electrical capacitance tomography (ECT) image reconstruction that uses different window functions as regularization. Image reconstruction for ECT is a typical ill-posed inverse problem: because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. Window functions such as the Hanning window and the cosine window are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularization.
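
    Window-function regularization can be viewed as spectral filtering of the SVD expansion: each singular component is multiplied by a filter factor drawn from a window over the singular-value index. The sketch below uses a Hanning-type roll-off and a random stand-in for the ECT sensitivity matrix.

```python
# Sketch of window-function regularization as spectral filtering: write the
# solution as a filtered SVD expansion and multiply each singular component
# by a filter factor taken from a window over the singular-value index. The
# sensitivity matrix A and data y are random stand-ins; the Hanning-type
# roll-off below is illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))          # stand-in ECT sensitivity matrix
y = rng.standard_normal(40)                 # stand-in capacitance data

U, s, Vt = np.linalg.svd(A, full_matrices=False)
i = np.arange(s.size)
f = 0.5 * (1.0 + np.cos(np.pi * i / (s.size - 1)))   # Hanning-type filter factors
x = Vt.T @ (f * (U.T @ y) / s)              # filtered SVD solution
# Tikhonov corresponds to f_i = s_i^2/(s_i^2 + alpha); windows give
# alternative roll-offs that suppress the noise-dominated small-s terms.
```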

  1. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  2. Enriched reproducing kernel particle method for fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam

    2018-06-01

    The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate this method using a moving least-squares approach. By enriching the traditional integer-order basis for RKPM with fractional-order power functions, leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.

  3. Simple picture for neutrino flavor transformation in supernovae

    NASA Astrophysics Data System (ADS)

    Duan, Huaiyu; Fuller, George M.; Qian, Yong-Zhong

    2007-10-01

    We can understand many recently discovered features of flavor evolution in dense, self-coupled supernova neutrino and antineutrino systems with a simple, physical scheme consisting of two quasistatic solutions. One solution closely resembles the conventional, adiabatic single-neutrino Mikheyev-Smirnov-Wolfenstein (MSW) mechanism, in that neutrinos and antineutrinos remain in mass eigenstates as they evolve in flavor space. The other solution is analogous to the regular precession of a gyroscopic pendulum in flavor space, and has been discussed extensively in recent works. Results of recent numerical studies are best explained with combinations of these solutions in the following general scenario: (1) Near the neutrino sphere, the MSW-like many-body solution obtains. (2) Depending on neutrino vacuum mixing parameters, luminosities, energy spectra, and the matter density profile, collective flavor transformation in the nutation mode develops and drives neutrinos away from the MSW-like evolution and toward regular precession. (3) Neutrino and antineutrino flavors roughly evolve according to the regular precession solution until neutrino densities are low. In the late stage of the precession solution, a stepwise swapping develops in the energy spectra of νe and νμ/ντ. We also discuss some subtle points regarding adiabaticity in flavor transformation in dense-neutrino systems.

  4. Thermodynamic Modeling of the YO(1.5)-ZrO2 System

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Liu, Zi-Kui; Kaufman, Larry; Zhang, Fan

    2003-01-01

    The YO1.5-ZrO2 system consists of five solid solutions, one liquid solution, and one intermediate compound. A thermodynamic description of this system is developed, which allows calculation of the phase diagram and thermodynamic properties. Two different solution models are used: a neutral species model with YO1.5 and ZrO2 as the components, and a charged species model with Y(+3), Zr(+4), O(-2), and vacancies as components. For each model, regular and sub-regular solution parameters are derived from selected equilibrium phase and thermodynamic data.
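
    A minimal sketch of the neutral-species description: the molar Gibbs energy of mixing with regular and sub-regular terms is a two-term Redlich-Kister excess polynomial. The interaction parameters below are placeholders, not the assessed values from the paper.

```python
# Sketch of the neutral-species solution model: molar Gibbs energy of mixing
# for a binary (YO1.5, ZrO2) phase with regular and sub-regular terms, i.e.
# a two-term Redlich-Kister excess polynomial. The interaction parameters
# are placeholders, not the assessed values from the paper.
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)

def g_mix(x2, T, L0=-30000.0, L1=5000.0):
    """x2: mole fraction ZrO2; L0: regular, L1: sub-regular parameter (J/mol)."""
    x1 = 1.0 - x2
    g_ideal = R * T * (x1 * np.log(x1) + x2 * np.log(x2))   # ideal mixing
    g_excess = x1 * x2 * (L0 + L1 * (x1 - x2))              # Redlich-Kister
    return g_ideal + g_excess

x = np.linspace(0.01, 0.99, 99)
G = g_mix(x, T=1800.0)    # J/mol across the composition range; a common-
                          # tangent construction on G gives phase boundaries
```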

  5. A network model of successive partitioning-limited solute diffusion through the stratum corneum.

    PubMed

    Schumm, Phillip; Scoglio, Caterina M; van der Merwe, Deon

    2010-02-07

    As the most exposed point of contact with the external environment, the skin is an important barrier to many chemical exposures, including medications, potentially toxic chemicals and cosmetics. Traditional dermal absorption models treat the stratum corneum lipids as a homogenous medium through which solutes diffuse according to Fick's first law of diffusion. This approach does not explain non-linear absorption and irregular distribution patterns within the stratum corneum lipids as observed in experimental data. A network model, based on successive partitioning-limited solute diffusion through the stratum corneum, where the lipid structure is represented by a large, sparse, and regular network where nodes have variable characteristics, offers an alternative, efficient, and flexible approach to dermal absorption modeling that simulates non-linear absorption data patterns. Four model versions are presented: two linear models, which have unlimited node capacities, and two non-linear models, which have limited node capacities. The non-linear model outputs produce absorption to dose relationships that can be best characterized quantitatively by using power equations, similar to the equations used to describe non-linear experimental data.

  6. Sensitivity of rough differential equations: An approach through the Omega lemma

    NASA Astrophysics Data System (ADS)

    Coutin, Laure; Lejay, Antoine

    2018-03-01

    The Itô map gives the solution of a Rough Differential Equation, a generalization of an Ordinary Differential Equation driven by an irregular path, when existence and uniqueness hold. By studying how a path is transformed through the vector field which is integrated, we prove that the Itô map is Hölder or Lipschitz continuous with respect to all its parameters. This result unifies and weakens the hypotheses of the regularity results already established in the literature.

  7. Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam

    2018-04-01

    Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models, e.g., via k-SVD sparse learning or traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.

  8. Sandia fracture challenge 2: Sandia California's modeling approach

    DOE PAGES

    Karlson, Kyle N.; James W. Foulk, III; Brown, Arthur A.; ...

    2016-03-09

    The second Sandia Fracture Challenge illustrates that predicting the ductile fracture of Ti-6Al-4V subjected to moderate and elevated rates of loading requires thermomechanical coupling, elasto-thermo-poro-viscoplastic constitutive models with the physics of anisotropy, and regularized numerical methods for crack initiation and propagation. We detail our initial approach with an emphasis on iterative calibration and systematically increasing complexity to accommodate anisotropy in the context of an isotropic material model. Blind predictions illustrate strengths and weaknesses of our initial approach. We then revisit our findings to illustrate the importance of including anisotropy in the failure process. Furthermore, mesh-independent solutions of continuum damage models having both isotropic and anisotropic yield surfaces are obtained through nonlocality and localization elements.

  9. Downscaling Satellite Precipitation with Emphasis on Extremes: A Variational ℓ1-Norm Regularization in the Derivative Domain

    NASA Astrophysics Data System (ADS)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.

    2014-05-01

    The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a database of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case studies featuring the downscaling of a hurricane precipitation field.
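
    A toy illustration of the variational idea, in one dimension, with a smoothed TV penalty standing in for the paper's derivative-domain ℓ1 formulation; the block-averaging operator, weights, and step size are assumptions:

    ```python
    import numpy as np

    # Minimize 0.5*||H x - y||^2 + lam * sum sqrt((D x)^2 + eps^2) by gradient
    # descent, where H is a coarse-graining (block-average) operator and the
    # second term is a smoothed total-variation penalty favoring sharp edges.
    n, block = 64, 4
    H = np.kron(np.eye(n // block), np.full((1, block), 1.0 / block))
    x_true = np.zeros(n); x_true[20:30] = 5.0; x_true[45:50] = 2.0  # sharp field
    y = H @ x_true + 0.05 * np.random.default_rng(0).normal(size=n // block)

    lam, eps, step = 0.1, 0.05, 0.1
    x = H.T @ y  # crude initial upsampling via the adjoint
    for _ in range(3000):
        dx = np.diff(x)
        w = dx / np.sqrt(dx**2 + eps**2)   # derivative of the smoothed |.|
        tv_grad = np.zeros(n)
        tv_grad[:-1] -= w
        tv_grad[1:] += w
        x -= step * (H.T @ (H @ x - y) + lam * tv_grad)
    print(np.round(x[18:32], 2))  # recovered edge region
    ```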

  10. Downscaling Satellite Precipitation with Emphasis on Extremes: A Variational 1-Norm Regularization in the Derivative Domain

    NASA Technical Reports Server (NTRS)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.

    2013-01-01

    The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called 1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a database of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case studies featuring the downscaling of a hurricane precipitation field.

  11. Boundary Regularity for the Porous Medium Equation

    NASA Astrophysics Data System (ADS)

    Björn, Anders; Björn, Jana; Gianazza, Ugo; Siljander, Juhana

    2018-05-01

    We study the boundary regularity of solutions to the porous medium equation u_t = Δu^m in the degenerate range m > 1. In particular, we show that in cylinders the Dirichlet problem with positive continuous boundary data on the parabolic boundary has a solution which attains the boundary values, provided that the spatial domain satisfies the elliptic Wiener criterion. This condition is known to be optimal, and it is a consequence of our main theorem, which establishes a barrier characterization of regular boundary points for general (not necessarily cylindrical) domains in R^{n+1}. One of our fundamental tools is a new strict comparison principle between sub- and superparabolic functions, which makes it essential for us to study both nonstrict and strict Perron solutions to be able to develop a fruitful boundary regularity theory. Several other comparison principles and pasting lemmas are also obtained. In the process we obtain a rather complete picture of the relation between sub/superparabolic functions and weak sub/supersolutions.

  12. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain strong sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strong sparsity-promoting solutions, we employ the ℓ1/2 norm as a regularizer. Recent work on ℓ1/2-norm regularization theory in compressive sensing shows that its solutions can be sparser than those obtained with the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems, whose closed-form solutions can then be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary avoids trivial solutions while simultaneously capturing its intrinsic properties. The experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn a more accurate dictionary in terms of dictionary recovery and image processing than state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Well-posedness of characteristic symmetric hyperbolic systems

    NASA Astrophysics Data System (ADS)

    Secchi, Paolo

    1996-06-01

    We consider the initial-boundary-value problem for quasi-linear symmetric hyperbolic systems with characteristic boundary of constant multiplicity. We show the well-posedness in Hadamard's sense (i.e., existence, uniqueness and continuous dependence of solutions on the data) of regular solutions in suitable function spaces which take into account the loss of regularity in the normal direction to the characteristic boundary.

  14. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularization solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors on the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR and the recently proposed Hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
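
    A minimal numpy sketch of CGLS (conjugate gradients on the normal equations) with the iteration count acting as the regularization parameter, stopped by a discrepancy-style rule; the test matrix and noise level are synthetic assumptions, not the holography setup:

    ```python
    import numpy as np

    def cgls(A, b, noise_level, max_iter=50):
        x = np.zeros(A.shape[1])
        r = b - A @ x          # residual in data space
        s = A.T @ r            # residual of the normal equations
        p = s.copy()
        gamma = s @ s
        for k in range(max_iter):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            if np.linalg.norm(r) <= noise_level:  # stop before noise dominates
                return x, k + 1
            s = A.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x, max_iter

    rng = np.random.default_rng(1)
    A = rng.normal(size=(100, 80)) @ np.diag(0.9 ** np.arange(80))  # ill-conditioned
    x_true = rng.normal(size=80)
    noise = 0.01 * rng.normal(size=100)
    x_reg, iters = cgls(A, A @ x_true + noise, np.linalg.norm(noise) * 1.05)
    print(iters, np.linalg.norm(x_reg - x_true))
    ```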

  15. Analytic regularization of uniform cubic B-spline deformation fields.

    PubMed

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed, and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central-differencing solution.
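
    For orientation, a numerical central-differencing estimate of thin-plate bending energy, the baseline quantity the analytic spline method accelerates; the grid, spacing, and displacement field below are hypothetical:

    ```python
    import numpy as np

    # Thin-plate energy of a 2-D field: integral of u_xx^2 + 2*u_xy^2 + u_yy^2,
    # approximated by second-order central differences on a uniform grid.
    def bending_energy(u, h=1.0):
        u_xx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2
        u_yy = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2
        u_xy = (u[2:, 2:] - u[2:, :-2] - u[:-2, 2:] + u[:-2, :-2]) / (4 * h**2)
        return np.sum(u_xx**2 + 2 * u_xy**2 + u_yy**2) * h**2

    y, x = np.mgrid[0:32, 0:32]
    u = np.sin(x / 5.0) * np.cos(y / 7.0)  # smooth toy displacement component
    print(bending_energy(u))
    ```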

  16. Lax Integrability and the Peakon Problem for the Modified Camassa-Holm Equation

    NASA Astrophysics Data System (ADS)

    Chang, Xiangke; Szmigielski, Jacek

    2018-02-01

    Peakons are special weak solutions of a class of nonlinear partial differential equations modelling non-linear phenomena such as the breakdown of regularity and the onset of shocks. We show that the natural concept of weak solutions in the case of the modified Camassa-Holm equation studied in this paper is dictated by the distributional compatibility of its Lax pair and, as a result, it differs from the one proposed and used in the literature based on the concept of weak solutions used for equations of the Burgers type. Subsequently, we give a complete construction of peakon solutions satisfying the modified Camassa-Holm equation in the sense of distributions; our approach is based on solving a certain inverse boundary value problem, the solution of which hinges on a combination of classical techniques of analysis involving Stieltjes' continued fractions and multi-point Padé approximations. We propose sufficient conditions needed to ensure the global existence of peakon solutions and analyze the large time asymptotic behaviour, whose special features include the formation of pairs of peakons that share asymptotic speeds, as well as a Toda-like sorting property.

  17. Learning Robust and Discriminative Subspace With Low-Rank Constraints.

    PubMed

    Li, Sheng; Fu, Yun

    2016-11-01

    In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to learn robust and discriminative subspaces for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of the subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.

  18. Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems

    NASA Astrophysics Data System (ADS)

    Cianchi, Andrea; Maz'ya, Vladimir G.

    2018-05-01

    Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L2-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.

  19. Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.

    2013-12-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.

  20. Exact Solution of the Gyration Radius of an Individual's Trajectory for a Simplified Human Regular Mobility Model

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong

    2011-12-01

    We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to the workplace, going to do leisure activities and returning home. With the assumption that the individual has a constant travel speed and a lower limit on the time spent at home and at work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution of the gyration radius. The analytical solution captures the empirical observation well.
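
    The empirical counterpart of the derived quantity is straightforward to compute; a small sketch with made-up daily stops:

    ```python
    import numpy as np

    # Empirical gyration radius of a trajectory: root-mean-square distance of
    # the visited positions from their center of mass. Coordinates are
    # hypothetical stops of a home -> work -> leisure -> home cycle, in km.
    def gyration_radius(points):
        r = np.asarray(points, dtype=float)
        center = r.mean(axis=0)
        return np.sqrt(np.mean(np.sum((r - center) ** 2, axis=1)))

    daily_track = [(0, 0), (6, 1), (4, 5), (0, 0)]
    print(gyration_radius(daily_track))
    ```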

  1. Investigation of Service Quality of Measurement Reference Points for the Internet Services on Mobile Networks

    NASA Astrophysics Data System (ADS)

    Lipenbergs, E.; Bobrovs, Vj.; Ivanovs, G.

    2016-10-01

    To ensure that end-users and consumers have access to comprehensive, comparable and user-friendly information regarding the Internet access service quality, it is necessary to implement and regularly renew a set of legislative regulatory acts and to provide monitoring of the quality of Internet access services regarding the current European Regulatory Framework. The actual situation regarding the quality of service monitoring solutions in different European countries depends on national regulatory initiatives and public awareness. The service monitoring solutions are implemented using different measurement methodologies and tools. The paper investigates practical implementations for developing a harmonised approach to quality monitoring, in order to obtain objective information on the quality of Internet access services on mobile networks.

  2. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative errors. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
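
    A minimal sketch of an iteratively reweighted least-squares loop for an L1-norm data term (a generic stand-in for the iteratively reweighted norm algorithm mentioned above; the matrix, outlier pattern, and damping are assumptions):

    ```python
    import numpy as np

    # Approximate min_x ||A x - b||_1 by repeatedly solving a weighted
    # least-squares problem with weights w_i = 1 / max(|residual_i|, eps),
    # plus a small Tikhonov term lam for numerical stability.
    def irls_l1(A, b, lam=1e-3, n_iter=30, eps=1e-6):
        x = np.linalg.lstsq(A, b, rcond=None)[0]            # L2 start
        for _ in range(n_iter):
            w = 1.0 / np.maximum(np.abs(A @ x - b), eps)    # reweighting
            AW = A * w[:, None]                             # rows scaled by w
            x = np.linalg.solve(A.T @ AW + lam * np.eye(A.shape[1]), AW.T @ b)
        return x

    rng = np.random.default_rng(2)
    A = rng.normal(size=(60, 20))
    b = A @ rng.normal(size=20)
    b[::10] += 50.0  # gross outliers, e.g. electrodes with lost signal
    print(np.linalg.norm(irls_l1(A, b) - np.linalg.lstsq(A, b, rcond=None)[0]))
    ```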

  3. Destructive materials thermal characteristics determination with application for spacecraft structures testing

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Budnik, S. A.; Nenarokomov, A. V.; Netelev, A. V.; Titov, D. M.

    2013-04-01

    In many practical situations it is impossible to measure directly the thermal and thermokinetic properties of the composite materials under analysis. Often the only way to overcome this difficulty is indirect measurement, which is usually formulated as the solution of an inverse heat transfer problem. Such problems are ill-posed in the mathematical sense, and their main feature shows itself in solution instability. That is why special regularizing methods are needed to solve them; here the general method of iterative regularization is applied to the estimation of material properties. The objective of this paper is to estimate thermal and thermokinetic properties of advanced materials using an approach based on inverse methods. An experimental-computational system for investigating the thermal and kinetic properties of composite materials by methods of inverse heat transfer problems, developed at the Thermal Laboratory of the Department of Space Systems Engineering, Moscow Aviation Institute (MAI), is presented. The system is aimed at investigating materials under conditions of unsteady contact and/or radiation heating over a wide range of temperature changes and heating rates, in vacuum, air and inert gas media.

  4. A new weak Galerkin finite element method for elliptic interface problems

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu; ...

    2016-08-26

    We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.

  5. A new weak Galerkin finite element method for elliptic interface problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.

  6. Compressed modes for variational problems in mathematical physics and compactly supported multiresolution basis for the Laplace operator

    NASA Astrophysics Data System (ADS)

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2014-03-01

    We will describe a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).

  7. Exact Markov chain and approximate diffusion solution for haploid genetic drift with one-way mutation.

    PubMed

    Hössjer, Ola; Tyvand, Peder A; Miloh, Touvia

    2016-02-01

    The classical Kimura solution of the diffusion equation is investigated for a haploid random mating (Wright-Fisher) model, with one-way mutations and initial value specified by the founder population. The validity of the transient diffusion solution is checked by exact Markov chain computations, using a Jordan decomposition of the transition matrix. The conclusion is that the one-way diffusion model mostly works well, although the rate of convergence depends on the initial allele frequency and the mutation rate. The diffusion approximation is poor for mutation rates so low that the non-fixation boundary is regular. When this happens we perturb the diffusion solution around the non-fixation boundary and obtain a more accurate approximation that takes quasi-fixation of the mutant allele into account. The main application is to quantify how fast a specific genetic variant of the infinite alleles model is lost. We also discuss extensions of the quasi-fixation approach to other models with small mutation rates. Copyright © 2015 Elsevier Inc. All rights reserved.
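
    A small sketch of the exact Markov chain being approximated; the population size, mutation rate, and founder frequency below are illustrative:

    ```python
    import numpy as np
    from scipy.stats import binom

    # Haploid Wright-Fisher drift with one-way mutation: a population of N
    # genes with j copies of the mutant allele, and mutation away from the
    # mutant at rate u per copy per generation. Row j of P gives the binomial
    # sampling distribution of next-generation counts.
    def transition_matrix(N, u):
        j = np.arange(N + 1)
        p = (j / N) * (1.0 - u)            # post-mutation sampling probability
        return binom.pmf(np.arange(N + 1)[None, :], N, p[:, None])

    N, u = 50, 0.01
    P = transition_matrix(N, u)
    state = np.zeros(N + 1); state[N // 2] = 1.0  # founder frequency 0.5
    for _ in range(100):                          # iterate 100 generations
        state = state @ P
    print("P(mutant lost after 100 generations) =", state[0])
    ```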

  8. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  9. Soliton solutions to the fifth-order Korteweg-de Vries equation and their applications to surface and internal water waves

    NASA Astrophysics Data System (ADS)

    Khusnutdinova, K. R.; Stepanyants, Y. A.; Tranter, M. R.

    2018-02-01

    We study solitary wave solutions of the fifth-order Korteweg-de Vries equation which contains, besides the traditional quadratic nonlinearity and third-order dispersion, additional terms including cubic nonlinearity and fifth-order linear dispersion, as well as two nonlinear dispersive terms. An exact solitary wave solution to this equation is derived, and the dependence of its amplitude, width, and speed on the parameters of the governing equation is studied. It is shown that the derived solution can represent either an embedded or regular soliton depending on the equation parameters. The nonlinear dispersive terms can drastically influence the existence of solitary waves, their nature (regular or embedded), profile, polarity, and stability with respect to small perturbations. We show, in particular, that in some cases embedded solitons can be stable even with respect to interactions with regular solitons. The results obtained are applicable to surface and internal waves in fluids, as well as to waves in other media (plasma, solid waveguides, elastic media with microstructure, etc.).

  10. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation a superiority of the BDM1 approach to the RT0 one is observed, which however is less significant for the accompanying solute transport.

  11. Robust cardiac motion estimation using ultrafast ultrasound data: a low-rank topology-preserving approach

    NASA Astrophysics Data System (ADS)

    Aviles, Angelica I.; Widlak, Thomas; Casals, Alicia; Nillesen, Maartje M.; Ammari, Habib

    2017-06-01

    Cardiac motion estimation is an important diagnostic tool for detecting heart diseases and it has been explored with modalities such as MRI and conventional ultrasound (US) sequences. US cardiac motion estimation still presents challenges because of complex motion patterns and the presence of noise. In this work, we propose a novel approach to estimate cardiac motion using ultrafast ultrasound data. Our solution is based on a variational formulation characterized by the L2-regularized class. Displacement is represented by a lattice of B-splines and we ensure robustness, in the sense of eliminating outliers, by applying a maximum likelihood type estimator. While this is an important part of our solution, the main object of this work is to combine low-rank data representation with topology preservation. Low-rank data representation (achieved by finding the k-dominant singular values of a Casorati matrix arranged from the data sequence) speeds up the global solution and achieves noise reduction. On the other hand, topology preservation (achieved by monitoring the Jacobian determinant) allows one to radically rule out distortions while carefully controlling the size of allowed expansions and contractions. Our variational approach is carried out on a realistic dataset as well as on a simulated one. We demonstrate how our proposed variational solution deals with complex deformations through careful numerical experiments. The low-rank constraint speeds up the convergence of the optimization problem while topology preservation ensures a more accurate displacement. Beyond cardiac motion estimation, our approach is promising for the analysis of other organs that exhibit motion.

  12. Bardeen regular black hole with an electric source

    NASA Astrophysics Data System (ADS)

    Rodrigues, Manuel E.; Silva, Marcos V. de S.

    2018-06-01

    If some energy conditions on the stress-energy tensor are violated, it is possible to construct regular black holes in General Relativity and in alternative theories of gravity. This type of solution has horizons but does not present singularities. The first regular black hole was presented by Bardeen and can be obtained from the Einstein equations in the presence of an electromagnetic field. E. Ayon-Beato and A. Garcia reinterpreted the Bardeen metric as a magnetic solution of General Relativity coupled to a nonlinear electrodynamics. In this work, we show that the Bardeen model may also be interpreted as a solution of the Einstein equations in the presence of an electric source, whose electric field does not behave as a Coulomb field. We analyze the asymptotic forms of the Lagrangian for the electric case and also analyze the energy conditions.
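
    For orientation, the Bardeen lapse function commonly quoted in this literature (a sketch of the standard form with mass m and charge parameter g, not reproduced from this paper); horizons sit where f(r) = 0, and the de Sitter-like core at r = 0 is what makes the geometry regular:

    ```latex
    % Standard Bardeen form (shown for orientation only):
    f(r) = 1 - \frac{2 m r^{2}}{\left(r^{2} + g^{2}\right)^{3/2}},
    \qquad
    ds^{2} = -f(r)\,dt^{2} + f(r)^{-1}\,dr^{2} + r^{2}\,d\Omega^{2}.
    ```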

  13. A Discontinuous Petrov-Galerkin Methodology for Adaptive Solutions to the Incompressible Navier-Stokes Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert

    2015-11-15

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  14. Scalar-vector soliton fiber laser mode-locked by nonlinear polarization rotation.

    PubMed

    Wu, Zhichao; Liu, Deming; Fu, Songnian; Li, Lei; Tang, Ming; Zhao, Luming

    2016-08-08

    We report a passively mode-locked fiber laser based on nonlinear polarization rotation (NPR), in which vector and scalar solitons can co-exist within the laser cavity. The mode-locked pulse evolves as a vector soliton in the strong birefringent segment and is transformed into a regular scalar soliton after the polarizer within the laser cavity. The existence of solutions in a polarization-dependent cavity comprising a periodic combination of two distinct nonlinear waves is demonstrated for the first time and is likely to be applicable to various other nonlinear systems. For very large local birefringence, our laser approaches the operation regime of vector soliton lasers, while it approaches scalar soliton fiber lasers under the condition of very small birefringence.

  15. Total variation superiorized conjugate gradient method for image reconstruction

    NASA Astrophysics Data System (ADS)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
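
    A rough sketch of the superiorization idea (a simplification that restarts CG after each perturbation, not the paper's S-CG; problem sizes and the step schedule are assumptions): alternate short CG runs on the normal equations with small, shrinking perturbations that reduce a smoothed TV of the iterate.

    ```python
    import numpy as np

    def tv_descent_direction(x, eps=1e-3):
        # Negative gradient of the smoothed 1-D total variation of x.
        dx = np.diff(x)
        w = dx / np.sqrt(dx**2 + eps**2)
        g = np.zeros_like(x)
        g[:-1] -= w; g[1:] += w
        return -g

    def cg_normal_eq(A, b, x, n_steps):
        # Plain CG on A^T A x = A^T b, started from the current iterate.
        r = A.T @ (b - A @ x); p = r.copy(); rs = r @ r
        for _ in range(n_steps):
            Ap = A.T @ (A @ p)
            alpha = rs / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    rng = np.random.default_rng(3)
    A = rng.normal(size=(40, 60))               # underdetermined: needs a prior
    x_true = np.zeros(60); x_true[25:35] = 1.0  # piecewise-constant target
    b = A @ x_true
    x, beta = np.zeros(60), 0.5
    for _ in range(40):
        d = tv_descent_direction(x)
        x = x + beta * d / (np.linalg.norm(d) + 1e-12)  # TV-reducing step
        beta *= 0.9                                     # shrinking step sizes
        x = cg_normal_eq(A, b, x, n_steps=3)            # restart CG after perturbing
    print(np.round(x[22:38], 2))
    ```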

  16. A Novel Hypercomplex Solution to Kepler's Problem

    NASA Astrophysics Data System (ADS)

    Condurache, C.; Martinuşi, V.

    2007-05-01

    By using a Sundman-like regularization, we offer a unified solution to Kepler's problem by means of hypercomplex numbers. The fundamental role in this paper is played by the Laplace-Runge-Lenz prime integral and by the algebra of hypercomplex numbers. The procedure unifies and generalizes the regularizations offered by Levi-Civita and Kustaanheimo-Stiefel. Closed-form hypercomplex expressions for the law of motion and velocity are deduced, together with novel hypercomplex prime integrals.

  17. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize the overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.

  18. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE PAGES

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    2017-11-09

    Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.

  19. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.

  20. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
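
    A minimal sketch of the iteratively regularized Gauss-Newton step itself; the toy forward map, starting point, and geometric decay of the regularization parameter are assumptions, and the paper's heuristic stopping rule is not reproduced here:

    ```python
    import numpy as np

    # IRGN update for F(x) = y with Jacobian J(x):
    #   x_{k+1} = x_k + (J^T J + a_k I)^{-1} (J^T (y - F(x_k)) + a_k (x0 - x_k)),
    # with a_k decreasing geometrically.
    def irgn(F, J, y, x0, a0=1.0, q=0.5, n_iter=15):
        x, a = x0.copy(), a0
        for _ in range(n_iter):
            Jk = J(x)
            lhs = Jk.T @ Jk + a * np.eye(x.size)
            rhs = Jk.T @ (y - F(x)) + a * (x0 - x)
            x = x + np.linalg.solve(lhs, rhs)
            a *= q
        return x

    # Toy nonlinear forward map and its Jacobian (illustrative only).
    F = lambda x: np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**2])
    J = lambda x: np.array([[2 * x[0], 1.0], [np.cos(x[0]), 2 * x[1]]])
    y = F(np.array([1.0, 0.5])) + 1e-3 * np.random.default_rng(4).normal(size=2)
    print(irgn(F, J, y, x0=np.array([0.5, 0.5])))
    ```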

  1. Reconstruction Of The Permittivity Profile Of A Stratified Dielectric Layer

    NASA Astrophysics Data System (ADS)

    Vogelzang, E.; Ferwerda, H. A.; Yevick, D.

    1985-03-01

    A numerical procedure is given for the reconstruction of the permittivity profile of a dielectric slab on a perfect conductor. Profiles not supporting guided modes are reconstructed from the complex reflection amplitude for TE-polarized, monochromatic plane waves incident from different directions using the Marchenko theory. The contribution of guided modes is incorporated in the reconstruction procedure through the Gelfand-Levitan equations. An advantage of our approach is that a unique solution for the permittivity profile is obtained without the use of complicated regularization techniques. Some illustrative numerical examples are presented.

  2. Force sensing using 3D displacement measurements in linear elastic bodies

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hui, Chung-Yuen

    2016-07-01

    In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed which finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves on the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution to the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force-sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
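
    For contrast, a minimal sketch of the Green's-function-style Tikhonov inversion that the paper improves on; the matrix G below is a random stand-in, not an elastic Green's function:

    ```python
    import numpy as np

    # Displacements u relate to tractions t through u = G t; a Tikhonov-
    # regularized least-squares inverse recovers t from noisy u.
    rng = np.random.default_rng(5)
    n_disp, n_trac = 120, 40
    G = rng.normal(size=(n_disp, n_trac)) / np.sqrt(n_disp)  # stand-in operator
    t_true = rng.normal(size=n_trac)
    u = G @ t_true + 0.05 * rng.normal(size=n_disp)          # noisy displacements

    lam = 1e-2  # regularization parameter; choosing it well is the hard part
    t_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_trac), G.T @ u)
    print(np.linalg.norm(t_hat - t_true) / np.linalg.norm(t_true))
    ```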

  3. On the local well-posedness and a Prodi-Serrin-type regularity criterion of the three-dimensional MHD-Boussinesq system without thermal diffusion

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Pei, Yuan

    2017-07-01

    We prove a Prodi-Serrin-type global regularity condition for the three-dimensional Magnetohydrodynamic-Boussinesq system (3D MHD-Boussinesq) without thermal diffusion, in terms of only two velocity and two magnetic components. To the best of our knowledge, this is the first Prodi-Serrin-type criterion for such a 3D hydrodynamic system which is not fully dissipative, and indicates that such an approach may be successful on other systems. In addition, we provide a constructive proof of the local well-posedness of solutions to the fully dissipative 3D MHD-Boussinesq system, and also the fully inviscid, irresistive, non-diffusive MHD-Boussinesq equations. We note that, as a special case, these results include the 3D non-diffusive Boussinesq system and the 3D MHD equations. Moreover, they can be extended without difficulty to include the case of a Coriolis rotational term.

  4. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  5. Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.

    2016-03-15

    In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov Regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full width at half maxima that are off by an average of 27% and matched full width at half maxima produce depths that are off by an average of 109%.

  6. Space structures insulating material's thermophysical and radiation properties estimation

    NASA Astrophysics Data System (ADS)

    Nenarokomov, A. V.; Alifanov, O. M.; Titov, D. M.

    2007-11-01

    In many practical situations in aerospace technology it is impossible to measure directly such properties of the analyzed materials (for example, composites) as their thermal and radiation characteristics. Often the only way to overcome this difficulty is indirect measurement, which is usually formulated as the solution of inverse heat transfer problems. Such problems are ill-posed in the mathematical sense, and their main feature shows itself in solution instability. That is why special regularizing methods are needed to solve them. Experimental identification of mathematical models of heat transfer by solving inverse problems is one of the modern, effective approaches. The objective of this paper is to estimate the thermal and radiation properties of advanced materials using an approach based on inverse methods.

  7. Optimal boundary regularity for a singular Monge-Ampère equation

    NASA Astrophysics Data System (ADS)

    Jian, Huaiyu; Li, You

    2018-06-01

    In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from a few geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce the (a, η) type to describe this convexity. As a result, we show that the more convex the domain is, the better the regularity of the solution. In particular, the regularity is best near angular points.

  8. Parsec's astrometry direct approaches .

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.

    Parallaxes - and hence the fundamental establishment of stellar distances - rank among the oldest, most crucial, and hardest of astronomical determinations. Arguably amongst the most essential too. The direct approach to obtaining trigonometric parallaxes, using a constrained set of equations to derive positions, proper motions, and parallaxes, has been labeled as risky. Properly so, because the axis of the parallactic apparent ellipse is smaller than one arcsec even for the nearest stars, and just a fraction of its perimeter can be followed. Thus the classical approach is to linearize the description by locking the solution to a set of precise positions of the Earth at the instants of observation, rather than to the dynamics of its orbit, and to adopt a close examination of the few points available. The PARSEC program targeted the parallaxes of 143 brown dwarfs. Five years of observations of the fields were taken with the WFI camera at the ESO 2.2m telescope, in Chile. The goal is to provide a statistically significant number of trigonometric parallaxes for BD sub-classes from L0 to T7. Taking advantage of the large, regularly spaced quantity of observations, here we take the risky approach to fit an ellipse to the observed ecliptic coordinates and derive the parallaxes. We also combine the solutions from different centroiding methods, widely proven in prior astrometric investigations. As each of those methods assesses diverse properties of the PSFs, they are taken as independent measurements and combined into a weighted least-squares general solution.

  9. Ca-Rich Carbonate Melts: A Regular-Solution Model, with Applications to Carbonatite Magma + Vapor Equilibria and Carbonate Lavas on Venus

    NASA Technical Reports Server (NTRS)

    Treiman, Allan H.

    1995-01-01

    A thermochemical model of the activities of species in carbonate-rich melts would be useful in quantifying chemical equilibria between carbonatite magmas and vapors and in extrapolating liquidus equilibria to unexplored P-T-X conditions. A regular-solution model of Ca-rich carbonate melts is developed here, using the fact that they are ionic liquids and can be treated (to a first approximation) as interpenetrating regular solutions of cations and of anions. Thermochemical data on systems of alkali metal cations with carbonate and other anions are drawn from the literature; data on systems with alkaline earth (and other) cations and carbonate (and other) anions are derived here from liquidus phase equilibria. The model is validated in that all available data (at 1 kbar) are consistent with single values for the melting temperature and heat of fusion of calcite, and all liquidi are consistent with the liquids acting as regular solutions. At 1 kbar, the metastable congruent melting temperature of calcite (CaCO3) is inferred to be 1596 K, with ΔH_fus(calcite) = 31.5 +/- 1 kJ/mol. Regular-solution interaction parameters (W) for Ca(2+) and alkali metal cations are in the range -3 to -12 kJ/mol; W for Ca(2+)-Ba(2+) is approximately -11 kJ/mol; W for Ca(2+)-Mg(2+) is approximately -40 kJ/mol, and W for Ca(2+)-La(3+) is approximately +85 kJ/mol. Solutions of carbonate and most anions (including OH(-), F(-), and SO4(2-)) are nearly ideal, with W between 0 (ideal) and -2.5 kJ/mol. The interaction of carbonate and phosphate ions is strongly nonideal, which is consistent with the suggestion of carbonate-phosphate liquid immiscibility. Interaction of carbonate and sulfide ions is also nonideal and suggestive of carbonate-sulfide liquid immiscibility. Solution of H2O, for all but the most H2O-rich compositions, can be modeled as a disproportionation to hydronium (H3O(+)) and hydroxyl (OH(-)) ions, with W for Ca(2+)-H3O(+) approximately 33 kJ/mol. The regular-solution model of carbonate melts can be applied to problems of carbonatite magma + vapor equilibria and of extrapolating liquidus equilibria to unstudied systems. Calculations on one carbonatite (the Husereau dike, Oka complex, Quebec, Canada) show that the anion solution of its magma contained an OH mole fraction of approximately 0.07, although the vapor in equilibrium with the magma had P(H2O) = 8.5 x P(CO2). F in carbonatite systems is calculated to be strongly partitioned into the magma (as F(-)) relative to coexisting vapor. In the Husereau carbonatite magma, the anion solution contained an F(-) mole fraction of approximately 6 x 10^-5.
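
    A binary regular solution relates activity coefficients to a single interaction parameter W through RT ln(γ1) = W x2². The minimal sketch below illustrates this relation; the numerical W and T values are illustrative only, loosely in the range of the Ca(2+)-alkali parameters quoted above:

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def regular_solution_activities(x1, W, T):
            """Activity coefficients in a binary regular solution:
            RT ln(gamma_1) = W * x2**2, RT ln(gamma_2) = W * x1**2."""
            x2 = 1.0 - x1
            g1 = np.exp(W * x2**2 / (R * T))
            g2 = np.exp(W * x1**2 / (R * T))
            return g1, g2

        # Illustrative values: W of order -10 kJ/mol, melt near 1400 K
        g1, g2 = regular_solution_activities(x1=0.8, W=-10e3, T=1400.0)
        print(g1, g2)  # activities follow as a_i = gamma_i * x_i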

  10. Chemical interactions and thermodynamic studies in aluminum alloy/molten salt systems

    NASA Astrophysics Data System (ADS)

    Narayanan, Ramesh

    The recycling of aluminum and aluminum alloys such as Used Beverage Containers (UBC) is done under a cover of molten salt flux based on (NaCl-KCl+fluorides). The reactions of aluminum alloys with molten salt fluxes have been investigated. Thermodynamic calculations are performed in the alloy/salt flux systems which allow quantitative predictions of the equilibrium compositions. There is preferential reaction of Mg in Al-Mg alloy with molten salt fluxes, especially those containing fluorides like NaF. An exchange reaction between Al-Mg alloy and molten salt flux has been demonstrated. Mg from the Al-Mg alloy transfers into the salt flux while Na from the salt flux transfers into the metal. Thermodynamic calculations indicated that the amount of Na in the metal increases as the Mg content in the alloy and/or the NaF content in the reacting flux increases. This is an important point because small amounts of Na have a detrimental effect on the mechanical properties of the Al-Mg alloy. The reactions of Al alloys with molten salt fluxes result in the formation of bluish-purple colored "streamers". It was established that the streamer is liquid alkali metal (Na and K in the case of NaCl-KCl-NaF systems) dissipating into the melt. The melts in which such streamers were observed are identified. The metal losses occurring due to reactions have been quantified, both by thermodynamic calculations and experimentally. A computer program has been developed to calculate ternary phase diagrams in molten salt systems from the constituting binary phase diagrams, based on a regular solution model. The extent of deviation of the binary systems from regular solution behavior has been quantified. The systems investigated in which good agreement was found between the calculated and experimental phase diagrams included NaF-KF-LiF, NaCl-NaF-NaI and KNO3-TlNO3-LiNO3. Furthermore, an insight has been provided into the interrelationship between the regular solution parameters and the topology of the phase diagram. The isotherms are flat (i.e. no skewness) when the regular solution parameters are zero. When the regular solution parameters are non-zero, the isotherms are skewed. A regular solution model is not adequate to accurately model the molten salt systems used in recycling, like NaCl-KCl-LiF and NaCl-KCl-NaF.
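
    The regular-solution basis for such ternary calculations is the molar Gibbs energy of mixing, G_mix = RT Σ x_i ln x_i + Σ_{i<j} W_ij x_i x_j. The sketch below evaluates it for purely hypothetical interaction parameters; with all W_ij = 0 the excess term vanishes, which is the flat-isotherm case noted above:

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def gibbs_mixing_ternary(x, W, T):
            """Molar Gibbs energy of mixing for a ternary regular solution:
            G_mix = RT * sum_i x_i ln x_i + sum_{i<j} W_ij x_i x_j,
            with W a symmetric 3x3 matrix of binary interaction parameters."""
            x = np.asarray(x, dtype=float)
            ideal = R * T * np.sum(x * np.log(x))
            excess = sum(W[i][j] * x[i] * x[j]
                         for i in range(3) for j in range(i + 1, 3))
            return ideal + excess

        # Hypothetical interaction parameters (J/mol)
        W = [[0, -4e3, -2e3], [-4e3, 0, -6e3], [-2e3, -6e3, 0]]
        print(gibbs_mixing_ternary([0.2, 0.3, 0.5], W, T=1000.0))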

  11. s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography

    PubMed Central

    Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai

    2016-01-01

    EEG source imaging enables us to reconstruct current density in the brain from electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, as well as to noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computation compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
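
    As a hedged illustration of the ℓ1−2 idea (a toy least-squares problem, not the paper's DCA/ADMM solver for vTGV), the sketch below minimizes an ℓ1−ℓ2 regularized objective by DCA: each outer step linearizes the concave -λ||x||_2 term and solves the resulting ℓ1 problem with proximal-gradient (ISTA) inner iterations:

        import numpy as np

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def l1_minus_l2(A, b, lam, outer=20, inner=200):
            """Minimize 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2) by DCA:
            linearize -||x||_2 at x_k, then run ISTA on the convex part."""
            m, n = A.shape
            x = np.zeros(n)
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            for _ in range(outer):
                nx = np.linalg.norm(x)
                u = x / nx if nx > 0 else np.zeros(n)  # subgradient of ||x||_2
                for _ in range(inner):
                    grad = A.T @ (A @ x - b) - lam * u
                    x = soft(x - grad / L, lam / L)
            return x

        # Sparse recovery toy example
        rng = np.random.default_rng(1)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100); x_true[[3, 30, 70]] = [1.5, -2.0, 1.0]
        b = A @ x_true + 0.01 * rng.standard_normal(40)
        x_hat = l1_minus_l2(A, b, lam=0.1)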

  12. PARSEC's Astrometry - The Risky Approach

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.

    2015-10-01

    Parallaxes - and hence the fundamental establishment of stellar distances - rank among the oldest, most direct, and hardest of astronomical determinations. Arguably amongst the most essential too. The direct approach to obtain trigonometric parallaxes, using a constrained set of equations to derive positions, proper motions, and parallaxes, has been labelled as risky. Properly so, because the axis of the parallactic apparent ellipse is smaller than one arcsec even for the nearest stars, and just a fraction of its perimeter can be followed. Thus the classical approach is of linearizing the description by locking the solution to a set of precise positions of the Earth at the instants of observation, rather than to the dynamics of its orbit, and of adopting a close examination of the few observations available. In the PARSEC program the parallaxes of 143 brown dwarfs were planned. Five years of observation of the fields were taken with the WFI camera at the ESO 2.2m telescope in Chile. The goal is to provide a statistically significant number of trigonometric parallaxes for BD sub-classes from L0 to T7. Taking advantage of the large, regularly spaced, quantity of observations, here we take the risky approach to fit an ellipse to the observed ecliptic coordinates and derive the parallaxes. We also combine the solutions from different centroiding methods, widely proven in prior astrometric investigations. As each of those methods assess diverse properties of the PSFs, they are taken as independent measurements, and combined into a weighted least-squares general solution. The results obtained compare well with the literature and with the classical approach.

  13. The numerical calculation of laminar boundary-layer separation

    NASA Technical Reports Server (NTRS)

    Klineberg, J. M.; Steger, J. L.

    1974-01-01

    Iterative finite-difference techniques are developed for integrating the boundary-layer equations, without approximation, through a region of reversed flow. The numerical procedures are used to calculate incompressible laminar separated flows and to investigate the conditions for regular behavior at the point of separation. Regular flows are shown to be characterized by an integrable saddle-type singularity that makes it difficult to obtain numerical solutions which pass continuously into the separated region. The singularity is removed and continuous solutions ensured by specifying the wall shear distribution and computing the pressure gradient as part of the solution. Calculated results are presented for several separated flows and the accuracy of the method is verified. A computer program listing and complete solution case are included.

  14. On convergence and convergence rates for Ivanov and Morozov regularization and application to some parameter identification problems in elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Kaltenbacher, Barbara; Klassen, Andrej

    2018-05-01

    In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.

  15. Regularity for Fully Nonlinear Elliptic Equations with Oblique Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Zhang, Kai

    2018-06-01

    In this paper, we obtain a series of regularity results for viscosity solutions of fully nonlinear elliptic equations with oblique derivative boundary conditions. In particular, we derive the pointwise C^α, C^{1,α} and C^{2,α} regularity. As byproducts, we also prove the Aleksandrov-Bakelman-Pucci (A-B-P) maximum principle, the Harnack inequality, and uniqueness and solvability of the equations.

  16. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merits of various inverse modelling strategies (MCMC, EnKF) for estimating model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
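
    A minimal sketch of the constrained-regularization idea, under simplifying assumptions (a linear toy model in place of DALEC, simple bounds in place of ecological balance conditions): the pseudo-inverse gives one of infinitely many solutions of an underdetermined problem, while inequality constraints select a physically admissible one:

        import numpy as np
        from scipy.optimize import lsq_linear

        # Ill-posed toy problem: wide matrix, so uniqueness fails
        rng = np.random.default_rng(2)
        A = rng.standard_normal((10, 25))
        x_true = np.abs(rng.standard_normal(25))
        y = A @ x_true + 0.01 * rng.standard_normal(10)

        # Unconstrained minimum-norm solution: existence and continuity
        # hold, but the answer is only one member of an affine family.
        x_pinv = np.linalg.pinv(A) @ y

        # Regularization via inequality constraints instead of a penalty:
        # simple bounds stand in for ecological-balance-type conditions.
        res = lsq_linear(A, y, bounds=(0.0, 5.0))
        x_con = res.x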

  17. The Space-Wise Global Gravity Model from GOCE Nominal Mission Data

    NASA Astrophysics Data System (ADS)

    Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.

    2011-12-01

    In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between the local orbital and gradiometer reference frames. Error covariances are computed by Monte Carlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all dependencies on external gravity information were removed, thus giving rise to a GOCE-only space-wise model. However this model showed an over-regularization at the highest degrees of the spherical harmonic expansion, due to the technique used to combine intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way as to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of the potential and of its second derivatives at mean satellite altitude. These grids have an information content very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed with a unique global covariance function, they could yield more information at the local level than the spherical harmonic coefficients of the global model. For this reason these grids seem useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals for possible future improvements. A test comparing the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.

  18. Regularization of moving boundaries in a laplacian field by a mixed Dirichlet-Neumann boundary condition: exact results.

    PubMed

    Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar

    2005-11-04

    The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.

  19. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  20. Structure-Function Network Mapping and Its Assessment via Persistent Homology

    PubMed Central

    2017-01-01

    Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514 x 2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways. PMID:28046127
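
    The abstract does not spell out the matrix function used for the mapping; as a hedged illustration, the sketch below uses the matrix exponential (network communicability), one common choice, and a plain Pearson correlation of upper-triangular entries as a single-threshold stand-in for the persistent-homology similarity measure:

        import numpy as np
        from scipy.linalg import expm

        def predict_fc(structural, beta=1.0):
            """Map a structural connectivity matrix to a predicted functional
            matrix via a matrix function; the matrix exponential is used here
            as one illustrative choice, not the paper's exact mapping."""
            return expm(beta * structural)

        def similarity(fc_pred, fc_emp):
            """Pearson correlation of upper-triangular entries: a simple
            single-threshold stand-in for the persistent-homology measure."""
            iu = np.triu_indices_from(fc_emp, k=1)
            return np.corrcoef(fc_pred[iu], fc_emp[iu])[0, 1]

        rng = np.random.default_rng(3)
        S = rng.random((30, 30)); S = (S + S.T) / 2; np.fill_diagonal(S, 0)
        F = np.tanh(predict_fc(S, beta=0.5))   # hypothetical "empirical" FC
        print(similarity(predict_fc(S, beta=0.5), F))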

  1. Automatic Generation of Data Types for Classification of Deep Web Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning the data types of a class of Web sources. The Brute-Force Learner is able to generate data types that achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  2. Brans-Dicke Theory with Λ>0: Black Holes and Large Scale Structures.

    PubMed

    Bhattacharya, Sourav; Dialektopoulos, Konstantinos F; Romano, Antonio Enea; Tomaras, Theodore N

    2015-10-30

    A step-by-step approach is followed to study cosmic structures in the context of Brans-Dicke theory with positive cosmological constant Λ and parameter ω. First, it is shown that regular stationary black-hole solutions not only have constant Brans-Dicke field ϕ, but can exist only for ω=∞, which forces the theory to coincide with general relativity. Generalizations of the theory in order to evade this black-hole no-hair theorem are presented. It is also shown that in the absence of a stationary cosmological event horizon in the asymptotic region, a stationary black-hole horizon can support nontrivial Brans-Dicke hair. Even more importantly, it is shown next that the presence of a stationary cosmological event horizon rules out any regular stationary solution appropriate for the description of a star. Thus, to describe a star one has to assume that there is no such stationary horizon in the faraway asymptotic region. Under this implicit assumption, generic spherical cosmic structures are studied perturbatively, and it is shown that only for ω>0 or ω≲-5 are their predicted maximum sizes consistent with observations. We also point out that many of the conclusions of this work differ qualitatively from the Λ=0 spacetimes.

  3. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach, in the context of low-rank separated representation, to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  4. Regularity of Solutions of the Nonlinear Sigma Model with Gravitino

    NASA Astrophysics Data System (ADS)

    Jost, Jürgen; Keßler, Enno; Tolksdorf, Jürgen; Wu, Ruijun; Zhu, Miaomiao

    2018-02-01

    We propose a geometric setup to study analytic aspects of a variant of the supersymmetric two-dimensional nonlinear sigma model. This functional extends the functional of Dirac-harmonic maps by gravitino fields. The system of Euler-Lagrange equations of the two-dimensional nonlinear sigma model with gravitino is calculated explicitly. The gravitino terms pose additional analytic difficulties in showing smoothness of weak solutions, which are overcome using Rivière's regularity theory and Riesz potential theory.

  5. Nonminimal Wu-Yang wormhole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balakin, A. B.; Zayats, A. E.; Sushkov, S. V.

    2007-04-15

    We discuss exact solutions of a three-parameter nonminimal Einstein-Yang-Mills model, which describe wormholes of a new type. These wormholes are considered to be supported by the SU(2)-symmetric Yang-Mills field, nonminimally coupled to gravity, with the Wu-Yang ansatz used for the gauge field. We distinguish between regular solutions, describing traversable nonminimal Wu-Yang wormholes, and black wormholes possessing one or two event horizons. The relation between the asymptotic mass of the regular traversable Wu-Yang wormhole and its throat radius is analyzed.

  6. Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.

    PubMed

    Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C

    2017-10-01

    The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on an expansion in Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and the 3D turbulent jet. Finally, most aeroacoustic computations with the LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (Palabos).

  7. The effect of regularization in motion compensated PET image reconstruction: a realistic numerical 4D simulation study.

    PubMed

    Tsoumpas, C; Polycarpou, I; Thielemans, K; Buerger, C; King, A P; Schaeffter, T; Marsden, P K

    2013-03-21

    Following continuous improvement in PET spatial resolution, respiratory motion correction has become an important task. Two of the most common approaches that utilize all detected PET events to motion-correct PET data are the reconstruct-transform-average method (RTA) and motion-compensated image reconstruction (MCIR). In RTA, separate images are reconstructed for each respiratory frame, subsequently transformed to one reference frame and finally averaged to produce a motion-corrected image. In MCIR, the projection data from all frames are reconstructed by including motion information in the system matrix so that a motion-corrected image is reconstructed directly. Previous theoretical analyses have explained why MCIR is expected to outperform RTA. It has been suggested that MCIR creates less noise than RTA because the images for each separate respiratory frame will be severely affected by noise. However, recent investigations have shown that in the unregularized case RTA images can have fewer noise artefacts, while MCIR images are more quantitatively accurate but have the common salt-and-pepper noise. In this paper, we perform a realistic numerical 4D simulation study to compare the advantages gained by including regularization within reconstruction for RTA and MCIR, in particular using the median root prior incorporated in the ordered-subsets maximum a posteriori one-step-late algorithm. In this investigation we demonstrate that MCIR with proper regularization parameters reconstructs lesions with less bias and lower root mean square error, and with CNR and standard deviation similar to regularized RTA. This finding is reproducible for a variety of noise levels (25, 50, 100 million counts), lesion sizes (8 mm, 14 mm diameter) and iterations. Nevertheless, regularized RTA can also be a practical solution for motion compensation, as a proper level of regularization reduces both bias and mean square error.

  8. Topics in Bethe Ansatz

    NASA Astrophysics Data System (ADS)

    Wang, Chunguang

    Integrable quantum spin chains have close connections to integrable quantum field theories, modern condensed matter physics, and string and Yang-Mills theories. Bethe ansatz is one of the most important approaches for solving quantum integrable spin chains. At the heart of the algebraic structure of integrable quantum spin chains are the quantum Yang-Baxter equation and the boundary Yang-Baxter equation. This thesis focuses on four topics in Bethe ansatz. The Bethe equations for the isotropic periodic spin-1/2 Heisenberg chain with N sites have solutions containing ±i/2 that are singular: both the corresponding energy and the algebraic Bethe ansatz vector are divergent. Such solutions must be carefully regularized. We consider a regularization involving a parameter that can be determined using a generalization of the Bethe equations. These generalized Bethe equations provide a practical way of determining which singular solutions correspond to eigenvectors of the model. The Bethe equations for the periodic XXX and XXZ spin chains admit singular solutions, for which the corresponding eigenvalues and eigenvectors are ill-defined. We use a twist regularization to derive conditions for such singular solutions to be physical, in which case they correspond to genuine eigenvalues and eigenvectors of the Hamiltonian. We analyze the ground state of the open spin-1/2 isotropic quantum spin chain with a non-diagonal boundary term using a recently proposed Bethe ansatz solution. As the coefficient of the non-diagonal boundary term tends to zero, the Bethe roots split evenly into two sets: those that remain finite, and those that become infinite. We argue that the former satisfy conventional Bethe equations, while the latter satisfy a generalization of the Richardson-Gaudin equations. We derive an expression for the leading correction to the boundary energy in terms of the boundary parameters. We argue that the Hamiltonians for A^(2)_{2n} open quantum spin chains corresponding to two choices of integrable boundary conditions have the symmetries Uq(Bn) and Uq(Cn), respectively. The deformation of Cn is novel, with a nonstandard coproduct. We find a formula for the Dynkin labels of the Bethe states (which determine the degeneracies of the corresponding eigenvalues) in terms of the numbers of Bethe roots of each type. With the help of this formula, we verify numerically (for a generic value of the anisotropy parameter) that the degeneracies and multiplicities of the spectra implied by the quantum group symmetries are completely described by the Bethe ansatz.

  9. Augmenting Space Technology Program Management with Secure Cloud & Mobile Services

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Munk, Christopher; Helble, Adelle; Press, Martin T.; George, Cory; Johnson, David

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Game Changing Development (GCD) program manages technology projects across all NASA centers and reports to NASA headquarters regularly on progress. Program stakeholders expect an up-to-date, accurate status and often have questions about the program's portfolio that require a timely response. Historically, reporting, data collection, and analysis were done with manual processes that were inefficient and prone to error. To address these issues, GCD set out to develop a new business automation solution. In doing this, the program wanted to leverage the latest information technology platforms and decided to utilize traditional systems along with new cloud-based web services and gaming technology for a novel and interactive user environment. The team also set out to develop a mobile solution for anytime information access. This paper discusses a solution to these challenging goals and how the GCD team succeeded in developing and deploying such a system. The architecture and approach taken have proven to be effective and robust and can serve as a model for others looking to develop secure interactive mobile business solutions for government or enterprise business automation.

  10. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
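
    A minimal sketch of the idea, under simplifying assumptions (vertical field component only, flat Cartesian geometry, Gaussian smoothing of the boundary data as the regularization): harmonic continuation in Fourier space multiplies each mode by exp(-|k| z), and the smoothing factor controls the amplification of measurement noise:

        import numpy as np

        def continue_field(bz0, dx, z, sigma):
            """Continue the vertical component of a potential field from
            z = 0 to height z via Fourier space, after Gaussian smoothing
            of the boundary data (a simple regularization of the Cauchy
            data): bz(k, z) = bz(k, 0) * exp(-(sigma*k)^2/2) * exp(-|k| z)."""
            ny, nx = bz0.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
            k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
            bz_hat = np.fft.fft2(bz0)
            bz_hat *= np.exp(-0.5 * (sigma * k) ** 2)  # smooth initial data
            bz_hat *= np.exp(-k * z)                   # harmonic continuation
            return np.real(np.fft.ifft2(bz_hat))

        # Toy magnetogram: a bipole on a 128x128 grid
        x = np.linspace(-1, 1, 128)
        X, Y = np.meshgrid(x, x)
        bz0 = (np.exp(-((X - .2)**2 + Y**2) / .01)
               - np.exp(-((X + .2)**2 + Y**2) / .01))
        bz_up = continue_field(bz0, dx=x[1] - x[0], z=0.1, sigma=0.02)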

  11. Cosmological rotating black holes in five-dimensional fake supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nozawa, Masato; Maeda, Kei-ichi; Waseda Research Institute for Science and Engineering, Okubo 3-4-1, Shinjuku, Tokyo 169-8555

    2011-01-15

    In a recent series of papers, we found an arbitrary dimensional, time-evolving, and spatially inhomogeneous solution in Einstein-Maxwell-dilaton gravity with particular couplings. Similar to the supersymmetric case, the solution can be arbitrarily superposed in spite of nontrivial time-dependence, since the metric is specified by a set of harmonic functions. When each harmonic has a single point source at the center, the solution describes a spherically symmetric black hole with regular Killing horizons, and the spacetime approaches asymptotically the Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology. We discuss in this paper that in 5 dimensions, this equilibrium condition traces back to the first-order 'Killing spinor' equation in 'fake supergravity' coupled to arbitrary U(1) gauge fields and scalars. We present a five-dimensional, asymptotically FLRW, rotating black-hole solution admitting a nontrivial 'Killing spinor', which is a spinning generalization of our previous solution. We argue that the solution admits nondegenerate and rotating Killing horizons, in contrast with the supersymmetric solutions. It is shown that the present pseudo-supersymmetric solution admits closed timelike curves around the central singularities. When only one harmonic is time-dependent, the solution oxidizes to 11 dimensions and realizes dynamically intersecting M2/M2/M2-branes in a rotating Kasner universe. Kaluza-Klein-type black holes are also discussed.

  12. Remarks on regular black holes

    NASA Astrophysics Data System (ADS)

    Nicolini, Piero; Smailagic, Anais; Spallucci, Euro

    Recently, it has been claimed by Chinaglia and Zerbini that the curvature singularity is present even in the so-called regular black hole solutions of the Einstein equations. In this brief note, we show that this criticism is devoid of any physical content.

  13. On solvability of boundary value problems for hyperbolic fourth-order equations with nonlocal boundary conditions of integral type

    NASA Astrophysics Data System (ADS)

    Popov, Nikolay S.

    2017-11-01

    The solvability of some initial-boundary value problems for linear hyperbolic equations of the fourth order is studied. A condition on the lateral boundary in these problems relates the values of a solution, or of the conormal derivative of a solution, to the values of some integral operator applied to the solution. Nonlocal boundary value problems for one-dimensional hyperbolic second-order equations with integral conditions on the lateral boundary were considered in the articles by A.I. Kozhanov. Higher-dimensional hyperbolic equations of higher order with integral conditions on the lateral boundary have not been studied earlier. Existence and uniqueness theorems for regular solutions are proven. The method of regularization and the method of continuation in a parameter are employed to establish solvability.

  14. Exact solutions of unsteady Korteweg-de Vries and time regularized long wave equations.

    PubMed

    Islam, S M Rayhanul; Khan, Kamruzzaman; Akbar, M Ali

    2015-01-01

    In this paper, we implement the exp(-Φ(ξ))-expansion method to construct exact traveling wave solutions for nonlinear evolution equations (NLEEs). Here we consider two model equations, namely the Korteweg-de Vries (KdV) equation and the time regularized long wave (TRLW) equation. These equations play a significant role in the nonlinear sciences. We obtain four types of explicit function solutions, namely hyperbolic, trigonometric, exponential and rational function solutions of the variables in the considered equations. It is shown that the applied method is quite efficient and practically well suited for the aforementioned problems, and for other NLEEs that arise in mathematical physics and engineering fields. PACS numbers: 02.30.Jr, 02.70.Wz, 05.45.Yv, 94.05.Fq.
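
    The exp(-Φ(ξ))-expansion itself is symbolic, but the kind of traveling-wave solution it produces is easy to verify; the sketch below checks the classical one-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0 (one common normalization, not necessarily the paper's) with sympy:

        import sympy as sp

        x, t = sp.symbols('x t', real=True)
        c = sp.symbols('c', positive=True)  # wave speed

        # One-soliton solution of u_t + 6 u u_x + u_xxx = 0
        u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t)) ** 2

        residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
        print(sp.simplify(residual.rewrite(sp.exp)))  # prints 0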

  15. Control for well-posedness about a class of non-Newtonian incompressible porous medium fluid equations

    NASA Astrophysics Data System (ADS)

    Deng, Shuxian; Ge, Xinxin

    2017-10-01

    Considering the non-Newtonian fluid equation of incompressible porous media, and using the properties of operator semigroups and measure spaces together with a squeezing argument, Fourier analysis and a priori estimates in the measure space are used to discuss the well-posedness of the solution of the equation, its asymptotic behavior, and its topological properties. Through the diffusion regularization method and a compactness argument, we study the overall decay rate of the solution of the equation in a certain space for suitable initial data. The decay estimate for the solution of the incompressible seepage equation is obtained, and the asymptotic behavior of the solution is derived using a double regularization model and the Duhamel principle.

  16. Long-time stability effects of quadrature and artificial viscosity on nodal discontinuous Galerkin methods for gas dynamics

    NASA Astrophysics Data System (ADS)

    Durant, Bradford; Hackl, Jason; Balachandar, Sivaramakrishnan

    2017-11-01

    Nodal discontinuous Galerkin schemes present an attractive approach to robust high-order solution of the equations of fluid mechanics, but remain accompanied by subtle challenges in their consistent stabilization. The effect of quadrature choices (full mass matrix vs spectral elements), over-integration to manage aliasing errors, and explicit artificial viscosity on the numerical solution of a steady homentropic vortex are assessed over a wide range of resolutions and polynomial orders using quadrilateral elements. In both stagnant and advected vortices in periodic and non-periodic domains the need arises for explicit stabilization beyond the numerical surface fluxes of discontinuous Galerkin spectral elements. Artificial viscosity via the entropy viscosity method is assessed as a stabilizing mechanism. It is shown that the regularity of the artificial viscosity field is essential to its use for long-time stabilization of small-scale features in nodal discontinuous Galerkin solutions of the Euler equations of gas dynamics. Supported by the Department of Energy Predictive Science Academic Alliance Program Contract DE-NA0002378.

  17. On the membrane approximation in isothermal film casting

    NASA Astrophysics Data System (ADS)

    Hagen, Thomas

    2014-08-01

    In this work, a one-dimensional model for isothermal film casting is studied. Film casting is an important engineering process to manufacture thin films and sheets from a highly viscous polymer melt. The model equations account for variations in film width and film thickness, and arise from thinness and kinematic assumptions for the free liquid film. The first aspect of our study is a rigorous discussion of the existence and uniqueness of stationary solutions. This objective is approached via the argument principle, exploiting the homotopy invariance of a family of analytic functions. As our second objective, we analyze the linearization of the governing equations about stationary solutions. It is shown that solutions for the associated boundary-initial value problem are given by a strongly continuous semigroup of bounded linear operators. To reach this result, we cast the relevant Cauchy problem in a more accessible form. These transformed equations allow us insight into the regularity of the semigroup, thus yielding the validity of the spectral mapping theorem for the semigroup and the spectrally determined growth property.

  18. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduced artifacts compared to reconstructions without phase cycling and achieved performances similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and of partial Fourier + divergence-free regularized flow imaging + PI + CS, were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Space-Time Discrete KPZ Equation

    NASA Astrophysics Data System (ADS)

    Cannizzaro, G.; Matetski, K.

    2018-03-01

    We study a general family of space-time discretizations of the KPZ equation and show that they converge to its solution. The approach we follow makes use of basic elements of the theory of regularity structures (Hairer in Invent Math 198(2):269-504, 2014) as well as its discrete counterpart (Hairer and Matetski in Discretizations of rough stochastic PDEs, 2015. arXiv:1511.06937). Since the discretization is in both space and time and we allow non-standard discretization for the product, the methods mentioned above have to be suitably modified in order to accommodate the structure of the models under study.

  20. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with discontinuous refractive index is a challenging kind of inverse problem. We employ primal-dual theory and the fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented with respect to the frequency, from low to high. We also discuss the initial guess selection strategy based on semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
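
    As a hedged sketch of generalized cross-validation in the simplest (linear Tikhonov) setting, rather than the authors' integral-equation solver: the SVD gives filter factors f_i = s_i²/(s_i² + λ²), from which the GCV score can be evaluated cheaply over a λ grid and minimized:

        import numpy as np

        def gcv_tikhonov(A, b, lambdas):
            """Generalized cross-validation for Tikhonov regularization:
            GCV(lam) = ||A x_lam - b||^2 / (m - sum of filter factors)^2,
            evaluated efficiently through the SVD of A."""
            m = A.shape[0]
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            # component of b outside the range of U
            perp = np.linalg.norm(b) ** 2 - np.linalg.norm(beta) ** 2
            scores = []
            for lam in lambdas:
                f = s**2 / (s**2 + lam**2)
                res2 = np.sum(((1 - f) * beta) ** 2) + perp
                scores.append(res2 / (m - np.sum(f)) ** 2)
            return np.array(scores)

        rng = np.random.default_rng(4)
        A = np.vander(np.linspace(0, 1, 60), 15, increasing=True)
        b = A @ rng.standard_normal(15) + 1e-2 * rng.standard_normal(60)
        lams = np.logspace(-8, 0, 50)
        best = lams[np.argmin(gcv_tikhonov(A, b, lams))]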

  1. Image Reconstruction from Under sampled Fourier Data Using the Polynomial Annihilation Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l 1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l 1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l 1 regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback in using TV as an l 1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, and was coined the "polynomial annihilation (PA) transform." This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l 1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.

  2. 21 CFR 606.65 - Supplies and reagents.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...

  3. 21 CFR 606.65 - Supplies and reagents.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...

  4. 21 CFR 606.65 - Supplies and reagents.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...

  5. 21 CFR 606.65 - Supplies and reagents.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...

  6. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models

    PubMed Central

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints, such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not been considered before in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than the classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994

  8. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    PubMed

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints, such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not been considered before in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than the classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website.

  9. Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.

    PubMed

    Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob

    2011-03-01

    We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ 1 and ℓ 2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find that it is considerably faster than competing methods.
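
    The pathwise strategy described above, cyclical coordinate descent with warm starts along a decreasing grid of regularization parameters, can be sketched compactly for the simpler squared-error case. The sketch below is not the authors' Cox partial-likelihood algorithm: it substitutes a Gaussian loss to expose the warm-start mechanics, and all function names and parameter values are illustrative.

```python
import numpy as np

def enet_coordinate_descent(X, y, lam, alpha, beta, n_sweeps=200, tol=1e-8):
    """Cyclical coordinate descent for
    (1/(2n))||y - X b||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n = X.shape[0]
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ beta                          # residual kept in sync with beta
    for _ in range(n_sweeps):
        max_delta = 0.0
        for j in range(X.shape[1]):
            b_old = beta[j]
            rho = X[:, j] @ r / n + col_sq[j] * b_old  # fit to partial residual
            # soft-threshold (l1 part), then shrink (l2 part)
            b_new = np.sign(rho) * max(abs(rho) - lam * alpha, 0.0)
            b_new /= col_sq[j] + lam * (1.0 - alpha)
            if b_new != b_old:
                r += X[:, j] * (b_old - b_new)         # cheap residual update
                beta[j] = b_new
                max_delta = max(max_delta, abs(b_new - b_old))
        if max_delta < tol:
            break
    return beta

def enet_path(X, y, alpha=0.5, n_lams=50, eps=1e-3):
    """Warm-started path from lam_max (where all coefficients are zero) down."""
    n = X.shape[0]
    lam_max = np.max(np.abs(X.T @ y)) / (n * alpha)
    beta = np.zeros(X.shape[1])
    path = []
    lams = np.geomspace(lam_max, eps * lam_max, n_lams)
    for lam in lams:
        beta = enet_coordinate_descent(X, y, lam, alpha, beta)  # warm start
        path.append(beta.copy())
    return lams, np.array(path)

# Toy data: 5 true nonzero coefficients out of 30.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
b_true = np.zeros(30); b_true[:5] = [2, -1.5, 1, -0.5, 0.25]
y = X @ b_true + 0.1 * rng.standard_normal(200)
lams, path = enet_path(X, y)
print((np.abs(path[-1]) > 1e-6).sum(), "nonzero at smallest lambda")
```

    Replacing the quadratic loss with the Cox partial likelihood changes only the inner coordinate update; the warm-started outer loop over the lambda grid is what makes the whole path cheap to compute.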

  10. Global gradient estimates for divergence-type elliptic problems involving general nonlinear operators

    NASA Astrophysics Data System (ADS)

    Cho, Yumi

    2018-05-01

    We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems, which approach the regular problems considered as the gradient variable goes to infinity.

  11. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well as or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
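
    The database-plus-regression idea lends itself to a compact numerical sketch. The snippet below assumes a smoothing Gaussian kernel, ridge regression as the "regression function of controlled complexity", and nonnegativity/normalization as the constraint set; all of these are illustrative stand-ins rather than the paper's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized Fredholm kernel K (forward problem b = K x); a smoothing
# Gaussian kernel stands in for the physical kernel here.
n = 64
t = np.linspace(0.0, 1.0, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)
K /= K.sum(axis=1, keepdims=True)

# 1) Database: forward-map many plausible inputs (random nonnegative,
#    normalized "spectra") -- the stable direction of the problem.
def random_input():
    x = np.abs(np.cumsum(rng.standard_normal(n)))
    return x / x.sum()

X_in = np.stack([random_input() for _ in range(2000)])
B_out = X_in @ K.T                        # corresponding forward outputs

# 2) Regression of controlled complexity: ridge map from outputs back
#    to inputs, fitted on the database.
lam = 1e-3
W = np.linalg.solve(B_out.T @ B_out + lam * np.eye(n), B_out.T @ X_in)

# 3) Predict for unseen noisy data, then project onto the constraint set
#    (nonnegative, unit integral -- assumed constraints for this sketch).
x_true = random_input()
b_obs = K @ x_true + 1e-4 * rng.standard_normal(n)
x_hat = np.clip(b_obs @ W, 0.0, None)
x_hat /= x_hat.sum()
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```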

  12. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
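
    The core computational idea in this record, that neighboring interactions on a regular grid reduce to a discrete convolution with the Green's function and can therefore be evaluated with zero-padded FFTs, can be sketched in a few lines. The free-space Helmholtz Green's function and the crude treatment of its r = 0 singularity below are illustrative assumptions, not the paper's quadrature.

```python
import numpy as np
from scipy.signal import fftconvolve

# Free-space Helmholtz Green's function exp(ikr)/(4*pi*r) sampled on a
# regular grid; the r = 0 singularity is crudely regularized here, whereas
# a production code would use a proper self-term quadrature.
k, h, n = 2 * np.pi, 0.1, 32
ax = h * (np.arange(2 * n - 1) - (n - 1))        # padded coordinate axis
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
r[r == 0] = 0.5 * h                              # crude self-term fix
G = np.exp(1j * k * r) / (4 * np.pi * r)

# Scattering contrast acting as a source distribution on the n^3 grid.
q = np.zeros((n, n, n), dtype=complex)
q[n // 2, n // 2, n // 2] = 1.0

# One Green's-function convolution over the whole grid: O(N log N) via
# zero-padded FFTs instead of O(N^2) direct summation over element pairs.
field = fftconvolve(G, q, mode="valid") * h**3
print(field.shape)                               # (n, n, n)
```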

  13. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  14. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  15. Comment on "Construction of regular black holes in general relativity"

    NASA Astrophysics Data System (ADS)

    Bronnikov, Kirill A.

    2017-12-01

    We claim that the paper by Zhong-Ying Fan and Xiaobao Wang on nonlinear electrodynamics coupled to general relativity [Phys. Rev. D 94, 124027 (2016)], although correct in general, in some respects repeats previously obtained results without giving proper references. There is also an important point missing in this paper, which is necessary for understanding the physics of the system: in solutions with an electric charge, a regular center requires non-Maxwell behavior of the Lagrangian function L(f), where f = FμνFμν, at small f. Therefore, in all electric regular black hole solutions with a Reissner-Nordström asymptotic, the Lagrangian L(f) is different in different parts of space, and the electromagnetic field behaves in a singular way at surfaces where L(f) suffers branching.

  16. Assessment of chemical exchange in tryptophan-albumin solution through (19)F multicomponent transverse relaxation dispersion analysis.

    PubMed

    Lin, Ping-Chang

    2015-06-01

    A number of NMR methods possess the capability of probing chemical exchange dynamics in solution. However, certain drawbacks limit the applications of these NMR approaches, particularly to a complex system. Here, we propose a procedure that integrates the regularized nonnegative least squares (NNLS) analysis of multiexponential T2 relaxation into Carr-Purcell-Meiboom-Gill (CPMG) relaxation dispersion experiments to probe chemical exchange in a multicompartmental system. The proposed procedure was validated through analysis of (19)F T2 relaxation data of 6-fluoro-DL-tryptophan in a two-compartment solution with and without bovine serum albumin. Given the regularized NNLS analysis of a T2 relaxation curve acquired, for example, at the CPMG frequency νCPMG = 125, the nature of two distinct peaks in the associated T2 distribution spectrum indicated 6-fluoro-DL-tryptophan either retaining the free state, with geometric mean ×/÷ multiplicative standard deviation (MSD) = 1851.2 ms ×/÷ 1.51, or undergoing free/albumin-bound interconversion, with geometric mean ×/÷ MSD = 236.8 ms ×/÷ 1.54, in the two-compartment system. Quantities of the individual tryptophan species were accurately reflected by the associated T2 peak areas, with an interconversion state-to-free state ratio of 0.45 ± 0.11. Furthermore, the CPMG relaxation dispersion analysis estimated the exchange rate between the free and albumin-bound states in this fluorinated tryptophan analog and the corresponding dissociation constant of the fluorinated tryptophan-albumin complex in the chemical-exchanging, two-compartment system.
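
    A regularized NNLS fit of a multiexponential T2 decay, the first step of the procedure described above, can be sketched as follows. The sketch appends Tikhonov rows to the kernel so that a standard NNLS solver minimizes the penalized objective; the echo spacing, T2 grid, noise level, and zeroth-order penalty are illustrative choices, not the experimental values.

```python
import numpy as np
from scipy.optimize import nnls

# CPMG echo times and a log-spaced grid of candidate T2 values (all
# sampling parameters are illustrative, not those of the experiment).
t = np.arange(1, 513) * 4.0                      # echo times, ms
T2 = np.geomspace(10.0, 5000.0, 100)             # ms
A = np.exp(-t[:, None] / T2[None, :])            # multiexponential kernel

# Two-component phantom echoing the free (~1851 ms) and exchanging
# (~237 ms) states reported above.
rng = np.random.default_rng(1)
y = 0.3 * np.exp(-t / 237.0) + 0.7 * np.exp(-t / 1851.0)
y += 1e-3 * rng.standard_normal(t.size)

# Regularized NNLS: appending sqrt(mu)*I rows makes the solver minimize
# ||A s - y||^2 + mu*||s||^2 subject to s >= 0 (zeroth-order Tikhonov).
mu = 1e-2
A_aug = np.vstack([A, np.sqrt(mu) * np.eye(T2.size)])
y_aug = np.concatenate([y, np.zeros(T2.size)])
s, _ = nnls(A_aug, y_aug)

# Areas of the two T2 peaks estimate the component fractions.
print("fractions:", s[T2 < 700].sum().round(2), s[T2 >= 700].sum().round(2))
```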

  17. A Flexible and Efficient Method for Solving Ill-Posed Linear Integral Equations of the First Kind for Noisy Data

    NASA Astrophysics Data System (ADS)

    Antokhin, I. I.

    2017-06-01

    We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, which frequently appear in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretical model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one as the errors of input data tend to zero. Simulated and astrophysical examples are presented.
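
    A minimal version of the Tikhonov branch of such a method, with a second-difference smoothness operator and Morozov's discrepancy principle used to pick the regularization parameter, might look as follows; the kernel, true solution, and parameter grid are all illustrative, and the paper's compact-set constraints are not implemented here.

```python
import numpy as np

def tikhonov_solve(A, b, lam, L):
    """Minimize ||A x - b||^2 + lam*||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

def discrepancy_choice(A, b, L, delta, lams):
    """Morozov's discrepancy principle: pick the largest lam whose
    residual does not exceed the noise level delta."""
    for lam in sorted(lams, reverse=True):
        x = tikhonov_solve(A, b, lam, L)
        if np.linalg.norm(A @ x - b) <= delta:
            return lam, x
    return lam, x            # fall back to the smallest lam tried

# Illustrative Fredholm kernel (first kind) and smooth true solution.
n = 100
s = np.linspace(0.0, 1.0, n)
A = np.exp(-5.0 * np.abs(s[:, None] - s[None, :])) / n
x_true = np.exp(-((s - 0.4) ** 2) / 0.02)
rng = np.random.default_rng(2)
noise = 1e-4 * rng.standard_normal(n)
b = A @ x_true + noise

# Second-difference operator: penalizes curvature (enforces smoothness).
L = np.diff(np.eye(n), 2, axis=0)

lam, x = discrepancy_choice(A, b, L, np.linalg.norm(noise),
                            np.geomspace(1e-12, 1.0, 40))
print("lam:", lam, " rel. error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```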

  18. Assessment and Calibration of Terrestrial Water Storage in North America with GRACE Level-1B Inter-satellite Residuals

    NASA Astrophysics Data System (ADS)

    Loomis, B.; Luthcke, S. B.

    2016-12-01

    The global time-variable gravity products from GRACE continue to provide unique and important measurements of vertically integrated terrestrial water storage (TWS). Despite substantial improvements in recent years to the quality of the GRACE solutions and analysis techniques, significant disagreements can still exist between various approaches to compute basin scale TWS. Applying the GRACE spherical harmonic solutions to TWS analysis requires the selection, design, and implementation of one of a wide variety of available filters. It is common to then estimate and apply a set of scale factors to these filtered solutions in an attempt to restore lost signal. The advent of global mascon solutions, such as those produced by our group at NASA GSFC, is an important advancement in time-variable gravity estimation. This method applies data-driven regularization at the normal equation level, resulting in improved estimates of regional TWS. Though mascons are a valuable product, the design of the constraint matrix, the global minimization of observation residuals, and the arc-specific parameters all introduce the possibility that localized basin scale signals are not perfectly recovered. The precise inter-satellite ranging instrument provides the primary observation set for the GRACE gravity solutions. Recently, we have developed an approach to analyze and calibrate basin scale TWS estimates directly from the inter-satellite observation residuals. To summarize, we compute the range-acceleration residuals for two different forward models by executing separate runs of our Level-1B processing system. We then quantify the linear relationship that exists between the modeled mass and the residual differences, defining a simple differential correction procedure that is applied to the modeled signals. This new calibration procedure does not require the computationally expensive formation and inversion of normal equations, and it eliminates any influence the solution technique may have on the determined regional time series of TWS. We apply this calibration approach to sixteen drainage basins that cover North America and present new measurements of TWS determined directly from the Level-1B range-acceleration residuals. Lastly, we compare these new solutions to other GRACE solutions and independent datasets.

  19. Handwashing with soap or alcoholic solutions? A randomized clinical trial of its effectiveness.

    PubMed

    Zaragoza, M; Sallés, M; Gomez, J; Bayas, J M; Trilla, A

    1999-06-01

    The effectiveness of an alcoholic solution compared with the standard hygienic handwashing procedure during regular work in clinical wards and intensive care units of a large public university hospital in Barcelona was assessed. A prospective, randomized clinical trial with crossover design, paired data, and blind evaluation was done. Eligible health care workers (HCWs) included permanent and temporary HCWs of wards and intensive care units. From each category, a random sample of persons was selected. HCWs were randomly assigned to regular handwashing (liquid soap and water) or handwashing with the alcoholic solution by using a crossover design. The number of colony-forming units on agar plates from hands printing in 3 different samples was counted. A total of 47 HCWs were included. The average reduction in the number of colony-forming units from samples before handwashing to samples after handwashing was 49.6% for soap and water and 88.2% for the alcoholic solution. When both methods were compared, the average number of colony-forming units recovered after the procedure showed a statistically significant difference in favor of the alcoholic solution (P <.001). The alcoholic solution was well tolerated by HCWs. Overall acceptance rate was classified as "good" by 72% of HCWs after 2 weeks' use. Of all HCWs included, 9.3% stated that the use of the alcoholic solution worsened minor pre-existing skin conditions. Although the regular use of hygienic soap and water handwashing procedures is the gold standard, the use of alcoholic solutions is effective and safe and deserves more attention, especially in situations in which the handwashing compliance rate is hampered by architectural problems (lack of sinks) or nursing work overload.

  20. Regularity estimates up to the boundary for elliptic systems of difference equations

    NASA Technical Reports Server (NTRS)

    Strikwerda, J. C.; Wade, B. A.; Bube, K. P.

    1986-01-01

    Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.

  1. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  2. On a model of electromagnetic field propagation in ferroelectric media

    NASA Astrophysics Data System (ADS)

    Picard, Rainer

    2007-04-01

    The Maxwell system in an anisotropic, inhomogeneous medium with a non-linear memory effect produced by a Maxwell-type system for the polarization is investigated under low regularity assumptions on data and domain. The particular form of memory in the system is motivated by a model for electromagnetic wave propagation in ferroelectric materials suggested by Greenberg, MacCamy and Coffman [J.M. Greenberg, R.C. MacCamy, C.V. Coffman, On the long-time behavior of ferroelectric systems, Phys. D 134 (1999) 362-383]. To avoid unnecessary regularity requirements the problem is approached as a space-time operator equation in the framework of extrapolation spaces (Sobolev lattices), a theoretical framework developed in [R. Picard, Evolution equations as space-time operator equations, Math. Anal. Appl. 173 (2) (1993) 436-458; R. Picard, Evolution equations as operator equations in lattices of Hilbert spaces, Glasnik Mat. 35 (2000) 111-136]. A solution theory for a large class of ferroelectric materials confined to an arbitrary open set (with suitably generalized boundary conditions) is obtained.

  3. Numerical Differentiation of Noisy, Nonsmooth Data

    DOE PAGES

    Chartrand, Rick

    2011-01-01

    We consider the problem of differentiating a function specified by noisy data. Regularizing the differentiation process avoids the noise amplification of finite-difference methods. We use total-variation regularization, which allows for discontinuous solutions. The resulting simple algorithm accurately differentiates noisy functions, including those which have a discontinuous derivative.
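
    A common way to minimize the resulting TV-regularized functional is a lagged-diffusivity fixed-point iteration, sketched below with dense matrices for small problems. This is a generic implementation of TV-regularized differentiation under stated assumptions, not necessarily the paper's exact algorithm; the penalty weight is tuned by hand.

```python
import numpy as np

def tv_derivative(f, dx, alpha, n_iter=50, eps=1e-8):
    """TV-regularized derivative of noisy samples f: minimize
    0.5*||A u - (f - f[0])||^2 + alpha*sum|D u| over the derivative u,
    where A is a running-sum (antiderivative) operator.  Solved with a
    lagged-diffusivity fixed point."""
    n = f.size
    A = np.tril(np.ones((n, n))) * dx            # cumulative integration
    D = np.diff(np.eye(n), 1, axis=0) / dx       # forward difference
    g = f - f[0]
    u = np.gradient(f, dx)                       # noisy initial guess
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ u) ** 2 + eps)    # lagged TV weights
        H = A.T @ A + alpha * D.T @ (w[:, None] * D)
        u = np.linalg.solve(H, A.T @ g)
    return u

# Noisy |x|: the true derivative is a step, which TV recovers sharply
# where finite differences would amplify the noise.
x = np.linspace(-1.0, 1.0, 201)
rng = np.random.default_rng(3)
f = np.abs(x) + 0.02 * rng.standard_normal(x.size)
u = tv_derivative(f, x[1] - x[0], alpha=1e-1)
print(u[:3].round(2), u[-3:].round(2))           # near -1 ... near +1
```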

  4. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
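
    The core iteration is short enough to sketch: fill the missing entries with the current estimate, take an SVD, soft-threshold the singular values, and repeat. The dense SVD below ignores the sparse-plus-low-rank structure the paper exploits for scalability, so this is only a small-scale illustration.

```python
import numpy as np

def soft_impute(M, mask, lam, n_iter=100, tol=1e-6):
    """Soft-Impute: iteratively fill missing entries with the current
    estimate, then shrink singular values by lam (soft-thresholded SVD).
    M holds the data; entries where mask is False are treated as missing."""
    Z = np.zeros_like(M)
    for _ in range(n_iter):
        filled = np.where(mask, M, Z)            # observed + current fill
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s_shrunk = np.maximum(s - lam, 0.0)      # nuclear-norm prox step
        Z_new = (U * s_shrunk) @ Vt
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
            return Z_new
        Z = Z_new
    return Z

# Low-rank test matrix with 50% of entries observed.
rng = np.random.default_rng(4)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 100))
mask = rng.random(A.shape) < 0.5
Z = soft_impute(A, mask, lam=1.0)
err = np.linalg.norm((Z - A)[~mask]) / np.linalg.norm(A[~mask])
print("relative error on missing entries:", err)
```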

  5. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  6. Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography.

    PubMed

    Yalavarthy, Phaneendra K; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2007-06-01

    Diffuse optical tomography (DOT) involves estimation of tissue optical properties using noninvasive boundary measurements. The image reconstruction procedure is a nonlinear, ill-posed, and ill-determined problem, so overcoming these difficulties requires regularization of the solution. While the methods developed for solving the DOT image reconstruction problem have a long history, there is little direct evidence on optimal regularization methods, or on a common theoretical framework for techniques that use least-squares (LS) minimization. A generalized least-squares (GLS) method is discussed here, which incorporates the variances and covariances among the individual data points and optical properties in the image into a structured weight matrix. It is shown that most of the least-squares techniques applied in DOT can be considered special cases of this more general LS approach. The performance of three minimization techniques using the same implementation scheme is compared using test problems with increasing noise level and increasing complexity within the imaging field. Techniques that use spatial-prior information as constraints can also be incorporated into the GLS formalism. It is also illustrated that inclusion of spatial priors reduces the image error by at least a factor of 2. The improvement of GLS minimization is even more apparent when the noise level in the data is high (as high as 10%), indicating that the benefits of this approach are important for reconstruction of data in a routine setting where the data variance can be known based upon the signal-to-noise properties of the instruments.
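
    A single linearized GLS update with a structured data-weight matrix can be sketched as below. The toy linear forward model stands in for the DOT Jacobian, and the diagonal data covariance and identity parameter covariance are illustrative; setting the data covariance proportional to the identity recovers ordinary least squares as a special case, in line with the abstract's point.

```python
import numpy as np

def gls_update(J, residual, data_cov, prior_cov):
    """One linearized generalized least-squares step:
    delta = (J^T C_d^{-1} J + C_x^{-1})^{-1} J^T C_d^{-1} r,
    where C_d encodes data variances/covariances and C_x^{-1} acts as a
    structured regularizer on the parameter update."""
    W = np.linalg.inv(data_cov)                  # data weight matrix
    H = J.T @ W @ J + np.linalg.inv(prior_cov)
    return np.linalg.solve(H, J.T @ W @ residual)

# Toy linear "forward model" standing in for the DOT Jacobian.
rng = np.random.default_rng(5)
m_data, n_param = 120, 40
J = rng.standard_normal((m_data, n_param))
x_true = rng.standard_normal(n_param)
sigma = 0.05 * (1.0 + rng.random(m_data))        # heteroscedastic noise
y = J @ x_true + sigma * rng.standard_normal(m_data)

C_d = np.diag(sigma**2)                          # data covariance
C_x = np.eye(n_param)                            # parameter covariance
x_hat = gls_update(J, y, C_d, C_x)               # one step from x = 0
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```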

  7. Electrophysiology of neurones of the inferior mesenteric ganglion of the cat.

    PubMed Central

    Julé, Y; Szurszewski, J H

    1983-01-01

    Intracellular recordings were obtained from cells in vitro in the inferior mesenteric ganglia of the cat. Neurones could be classified into three types: non-spontaneous, irregular discharging and regular discharging neurones. Non-spontaneous neurones had a stable resting membrane potential and responded with action potentials to indirect preganglionic nerve stimulation and to intracellular injection of depolarizing current. Irregular discharging neurones were characterized by a discharge of excitatory post-synaptic potentials (e.p.s.p.s.) which sometimes gave rise to action potentials. This activity was abolished by hexamethonium bromide, chlorisondamine and d-tubocurarine chloride. Tetrodotoxin and a low Ca2+ -high Mg2+ solution also blocked on-going activity in irregular discharging neurones. Regular discharging neurones were characterized by a rhythmic discharge of action potentials. Each action potential was preceded by a gradual depolarization of the intracellularly recorded membrane potential. Intracellular injection of hyperpolarizing current abolished the regular discharge of action potentials. No synaptic potentials were observed during hyperpolarization of the membrane potential. Nicotinic, muscarinic and adrenergic receptor blocking drugs did not modify the discharge of action potentials in regular discharging neurones. A low Ca2+ -high Mg2+ solution also had no effect on the regular discharge of action potentials. Interpolation of an action potential between spontaneous action potentials in regular discharging neurones reset the rhythm of discharge. It is suggested that regular discharging neurones were endogenously active and that these neurones provided synaptic input to irregular discharging neurones. PMID:6140310

  8. Electrophysiology of neurones of the inferior mesenteric ganglion of the cat.

    PubMed

    Julé, Y; Szurszewski, J H

    1983-11-01

    Intracellular recordings were obtained from cells in vitro in the inferior mesenteric ganglia of the cat. Neurones could be classified into three types: non-spontaneous, irregular discharging and regular discharging neurones. Non-spontaneous neurones had a stable resting membrane potential and responded with action potentials to indirect preganglionic nerve stimulation and to intracellular injection of depolarizing current. Irregular discharging neurones were characterized by a discharge of excitatory post-synaptic potentials (e.p.s.p.s.) which sometimes gave rise to action potentials. This activity was abolished by hexamethonium bromide, chlorisondamine and d-tubocurarine chloride. Tetrodotoxin and a low Ca2+ -high Mg2+ solution also blocked on-going activity in irregular discharging neurones. Regular discharging neurones were characterized by a rhythmic discharge of action potentials. Each action potential was preceded by a gradual depolarization of the intracellularly recorded membrane potential. Intracellular injection of hyperpolarizing current abolished the regular discharge of action potentials. No synaptic potentials were observed during hyperpolarization of the membrane potential. Nicotinic, muscarinic and adrenergic receptor blocking drugs did not modify the discharge of action potentials in regular discharging neurones. A low Ca2+ -high Mg2+ solution also had no effect on the regular discharge of action potentials. Interpolation of an action potential between spontaneous action potentials in regular discharging neurones reset the rhythm of discharge. It is suggested that regular discharging neurones were endogenously active and that these neurones provided synaptic input to irregular discharging neurones.

  9. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
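
    The black-box character of this approach is easy to illustrate: the divergence term in SURE is estimated from a single random probe of the reconstruction algorithm, with no access to its internals. In the sketch below a real-valued soft-thresholding denoiser stands in for an MRI reconstruction; the complex-valued, k-space-weighted extension described in the abstract is not reproduced.

```python
import numpy as np

def mc_sure(f, y, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE for a black-box estimator f of x from y = x + noise,
    noise ~ N(0, sigma^2 I):
    SURE = -n*sigma^2 + ||f(y) - y||^2 + 2*sigma^2*div f(y),
    with the divergence estimated from one random probe b."""
    rng = rng or np.random.default_rng()
    b = rng.standard_normal(y.shape)
    fy = f(y)
    div = b @ (f(y + eps * b) - fy) / eps        # randomized trace estimate
    return -y.size * sigma**2 + np.sum((fy - y) ** 2) + 2 * sigma**2 * div

# Tune the threshold of a soft-thresholding denoiser without ground truth:
# the SURE values track the true squared error across the parameter grid.
rng = np.random.default_rng(6)
x = np.zeros(2000); x[:50] = 5.0                 # sparse ground truth
sigma = 1.0
y = x + sigma * rng.standard_normal(x.size)

for lam in [0.5, 1.0, 2.0, 3.0, 4.0]:
    f = lambda v, t=lam: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    sure = mc_sure(f, y, sigma, rng=rng)
    sse = np.sum((f(y) - x) ** 2)                # oracle, for comparison only
    print(f"lam={lam:.1f}  SURE={sure:9.1f}  true SSE={sse:9.1f}")
```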

  10. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
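
    The decomposition can be illustrated with consensus ADMM, a standard augmented-Lagrangian scheme that alternates between separate component solves and multiplier updates steering the component models toward a common solution. The two random least-squares "datasets" below are illustrative stand-ins for the travel-time and dispersion subproblems.

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, n_iter=200):
    """Split a joint least-squares inversion into per-dataset subproblems
    tied together by equality constraints (consensus ADMM, scaled form):
    minimize sum_i 0.5*||A_i x_i - b_i||^2  subject to  x_i = z for all i."""
    n = As[0].shape[1]
    xs = [np.zeros(n) for _ in As]
    us = [np.zeros(n) for _ in As]               # scaled Lagrange multipliers
    z = np.zeros(n)
    for _ in range(n_iter):
        # Separate solution of each component problem (parallelizable).
        for i, (A, b) in enumerate(zip(As, bs)):
            H = A.T @ A + rho * np.eye(n)
            xs[i] = np.linalg.solve(H, A.T @ b + rho * (z - us[i]))
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # merge models
        for i in range(len(As)):                  # steer toward consensus
            us[i] += xs[i] - z
    return z

rng = np.random.default_rng(7)
x_true = rng.standard_normal(30)
As = [rng.standard_normal((80, 30)) for _ in range(2)]   # two "data types"
bs = [A @ x_true + 0.01 * rng.standard_normal(80) for A in As]
z = consensus_admm(As, bs)
print(np.linalg.norm(z - x_true) / np.linalg.norm(x_true))
```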

  11. Solutions of differential equations with regular coefficients by the methods of Richmond and Runge-Kutta

    NASA Technical Reports Server (NTRS)

    Cockrell, C. R.

    1989-01-01

    Numerical solutions of the differential equation which describes the electric field within an inhomogeneous layer of permittivity, upon which a perpendicularly-polarized plane wave is incident, are considered. Richmond's method and the Runge-Kutta method are compared for linear and exponential permittivity profiles. These two approximate solutions are also compared with the exact solutions.
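
    For reference, the Runge-Kutta side of such a comparison amounts to marching the second-order field equation as a first-order system. The sketch below assumes perpendicular polarization, a linear permittivity profile, and illustrative parameters; Richmond's method is not reproduced here.

```python
import numpy as np

# Field equation inside an inhomogeneous layer for perpendicular
# polarization: E''(z) + k0^2*(eps(z) - sin^2(theta))*E(z) = 0.
# The linear permittivity profile and all parameters are illustrative.
k0 = 2 * np.pi                                  # free-space wavenumber
theta = np.deg2rad(30.0)                        # angle of incidence
eps = lambda z: 1.0 + 1.5 * z                   # linear profile over [0, 1]

def rhs(z, y):
    E, dE = y
    return np.array([dE, -k0**2 * (eps(z) - np.sin(theta) ** 2) * E])

def rk4(rhs, y0, z0, z1, n_steps=2000):
    """Classical fourth-order Runge-Kutta marching of a first-order system."""
    h = (z1 - z0) / n_steps
    z, y = z0, np.asarray(y0, dtype=complex)
    for _ in range(n_steps):
        k1 = rhs(z, y)
        k2 = rhs(z + h / 2, y + h / 2 * k1)
        k3 = rhs(z + h / 2, y + h / 2 * k2)
        k4 = rhs(z + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        z += h
    return y

# March a unit-amplitude field with outgoing slope from z = 0 to z = 1.
kz0 = k0 * np.sqrt(eps(0.0) - np.sin(theta) ** 2 + 0j)
E, dE = rk4(rhs, [1.0, 1j * kz0], 0.0, 1.0)
print(abs(E), abs(dE))
```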

  12. Thick de Sitter brane solutions in higher dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzhunushaliev, Vladimir; Folomeev, Vladimir

    2009-01-15

    We present thick de Sitter brane solutions which are supported by two interacting phantom scalar fields in five-, six-, and seven-dimensional spacetime. It is shown that for all cases regular solutions with anti-de Sitter asymptotic (5D problem) and a flat asymptotic far from the brane (6D and 7D cases) exist. We also discuss the stability of our solutions.

  13. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces the noise significantly while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher and Fatemi model), which restores regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.

  14. Effects of regular and whitening dentifrices on remineralization of bovine enamel in vitro.

    PubMed

    Kielbassa, Andrej M; Tschoppe, Peter; Hellwig, Elmar; Wrbas, Karl-Thomas

    2009-02-01

    To compare in vitro the remineralizing effects of different regular dentifrices and whitening dentifrices (containing pyrophosphates) on predemineralized enamel. Specimens from 84 bovine incisors were embedded in epoxy resin, partly covered with nail varnish, and demineralized in a lactic acid solution (37 degrees C, pH 5.0, 8 days). Parts of the demineralized areas were covered with nail varnish, and specimens were randomly assigned to 6 groups. Subsequently, specimens were exposed to a remineralizing solution (37 degrees C, pH 7.0, 60 days) and brushed 3 times a day (1:3 slurry with remineralizing solution) with 1 of 3 regular dentifrices designed for anticaries (group 1, amine fluoride; group 2, sodium fluoride) or periodontal (group 3, amine/stannous fluoride) purposes, or a whitening dentifrice containing pyrophosphates (group 4, sodium fluoride). An experimental dentifrice (group 5, without pyrophosphates/fluorides) and a whitening dentifrice (group 6, monofluorophosphate) served as controls. Mineral loss and lesion depths were evaluated from contact microradiographs, and intergroup comparisons were performed using the closed-test procedure (alpha = .05). Compared to baseline, specimens brushed with the dentifrices containing stannous/amine fluorides revealed significant mineral gains and lesion depth reductions (P < .05). Concerning the reacquired mineral, the whitening dentifrice performed worse than the regular dentifrices (P > .05), while mineral gain, as well as lesion depth reduction, was negligible in the control groups. Dentifrices containing pyrophosphates perform worse than regular dentifrices but do not necessarily affect remineralization. Unless remineralizing efficacy is proven, whitening dentifrices should be recommended only after deliberate consideration in caries-prone patients.

  15. Three-dimensional finite elements for the analysis of soil contamination using a multiple-porosity approach

    NASA Astrophysics Data System (ADS)

    El-Zein, Abbas; Carter, John P.; Airey, David W.

    2006-06-01

    A three-dimensional finite-element model of contaminant migration in fissured clays or contaminated sand which includes multiple sources of non-equilibrium processes is proposed. The conceptual framework can accommodate a regular network of fissures in 1D, 2D or 3D and immobile solutions in the macro-pores of aggregated topsoils, as well as non-equilibrium sorption. A Galerkin weighted-residual statement for the three-dimensional form of the equations in the Laplace domain is formulated. Equations are discretized using linear and quadratic prism elements. The system of algebraic equations is solved in the Laplace domain and solution is inverted to the time domain numerically. The model is validated and its scope is illustrated through the analysis of three problems: a waste repository deeply buried in fissured clay, a storage tank leaking into sand and a sanitary landfill leaching into fissured clay over a sand aquifer.

  16. Distributed computing for macromolecular crystallography

    PubMed Central

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Ballard, Charles

    2018-01-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community. PMID:29533240

  17. Distributed computing for macromolecular crystallography.

    PubMed

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  18. Sparse Bayesian Inference and the Temperature Structure of the Solar Corona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Harry P.; Byers, Jeff M.; Crump, Nicholas A.

    Measuring the temperature structure of the solar atmosphere is critical to understanding how it is heated to high temperatures. Unfortunately, the temperature of the upper atmosphere cannot be observed directly, but must be inferred from spectrally resolved observations of individual emission lines that span a wide range of temperatures. Such observations are “inverted” to determine the distribution of plasma temperatures along the line of sight. This inversion is ill posed and, in the absence of regularization, tends to produce wildly oscillatory solutions. We introduce the application of sparse Bayesian inference to the problem of inferring the temperature structure of the solar corona. Within a Bayesian framework a preference for solutions that utilize a minimum number of basis functions can be encoded into the prior and many ad hoc assumptions can be avoided. We demonstrate the efficacy of the Bayesian approach by considering a test library of 40 assumed temperature distributions.

  19. A Computational Study of Shear Layer Receptivity

    NASA Astrophysics Data System (ADS)

    Barone, Matthew; Lele, Sanjiva

    2002-11-01

    The receptivity of two-dimensional, compressible shear layers to local and external excitation sources is examined using a computational approach. The family of base flows considered consists of a laminar supersonic stream separated from nearly quiescent fluid by a thin, rigid splitter plate with a rounded trailing edge. The linearized Euler and linearized Navier-Stokes equations are solved numerically in the frequency domain. The flow solver is based on a high order finite difference scheme, coupled with an overset mesh technique developed for computational aeroacoustics applications. Solutions are obtained for acoustic plane wave forcing near the most unstable shear layer frequency, and are compared to the existing low frequency theory. An adjoint formulation to the present problem is developed, and adjoint equation calculations are performed using the same numerical methods as for the regular equation sets. Solutions to the adjoint equations are used to shed light on the mechanisms which control the receptivity of finite-width compressible shear layers.

  20. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.

  1. Black-hole solutions with scalar hair in Einstein-scalar-Gauss-Bonnet theories

    NASA Astrophysics Data System (ADS)

    Antoniou, G.; Bakopoulos, A.; Kanti, P.

    2018-04-01

    In the context of the Einstein-scalar-Gauss-Bonnet theory, with a general coupling function between the scalar field and the quadratic Gauss-Bonnet term, we investigate the existence of regular black-hole solutions with scalar hair. Based on a previous theoretical analysis, which studied the evasion of the old and novel no-hair theorems, we consider a variety of forms for the coupling function (exponential, even and odd polynomial, inverse polynomial, and logarithmic) that, in conjunction with the profile of the scalar field, satisfy a basic constraint. Our numerical analysis then always leads to families of regular, asymptotically flat black-hole solutions with nontrivial scalar hair. The solution for the scalar field and the profile of the corresponding energy-momentum tensor, depending on the value of the coupling constant, may exhibit a nonmonotonic behavior, an unusual feature that highlights the limitations of the existing no-hair theorems. We also determine and study in detail the scalar charge, horizon area, and entropy of our solutions.

  2. Sinc-interpolants in the energy plane for regular solution, Jost function, and its zeros of quantum scattering

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Asharabi, R. M.

    2018-01-01

    In a remarkable note of Chadan [Il Nuovo Cimento 39, 697-703 (1965)], the author expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which lead to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to compute the regular wave solution as well as the Jost function and its zeros approximately. This work continues and markedly improves the results of Chadan and other related studies. Several worked examples are given with illustrations and comparisons with existing methods.
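
    The baseline that such techniques improve upon is the classical Whittaker cardinal (sinc) series, which reconstructs a band-limited function from uniform samples but converges slowly when truncated. A minimal sketch with an illustrative test function:

```python
import numpy as np

def sinc_interpolate(samples, h, t):
    """Truncated Whittaker cardinal series: f(t) ~ sum_n f(n*h) *
    sinc((t - n*h)/h), with np.sinc the normalized sinc sin(pi x)/(pi x)."""
    n = np.arange(samples.size)
    t = np.atleast_1d(t)
    return np.array([np.dot(samples, np.sinc((ti - n * h) / h)) for ti in t])

# Band-limited test function, sampled below the Nyquist limit 1/(2h) = 1.
h = 0.5
grid = np.arange(64) * h
f = lambda x: np.sin(2 * np.pi * 0.3 * x) + 0.5 * np.cos(2 * np.pi * 0.7 * x)
t = np.linspace(5.0, 25.0, 7)                    # interior points only
err = np.max(np.abs(sinc_interpolate(f(grid), h, t) - f(t)))
print("max truncation error:", err)              # small, but decays slowly
```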

  3. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy.

    PubMed

    Verveer, P. J.; Gemkow, M. J.; Jovin, T. M.

    1999-01-01

    We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified with a Bayesian theory according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness and without regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given sufficient higher signal level for the wide-field data the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.

  4. Black hole solution in the framework of arctan-electrodynamics

    NASA Astrophysics Data System (ADS)

    Kruglov, S. I.

    An arctan-electrodynamics coupled with the gravitational field is investigated. We obtain a regular black hole solution that at r → ∞ gives corrections to the Reissner-Nordström solution. The corrections to Coulomb’s law at r → ∞ are found. We evaluate the mass of the black hole, which is a function of the dimensional parameter β introduced in the model. The magnetically charged black hole is also investigated, and we obtain the magnetic mass of the black hole and the metric function at r → ∞. A regular black hole solution with a de Sitter core is obtained at r → 0. We show that there is no singularity of the Ricci scalar for electrically and magnetically charged black holes. Restrictions on the electric and magnetic fields are found that follow from the requirement of the absence of a superluminal sound speed and the requirement of classical stability.

  5. A Faith-Based and Cultural Approach to Promoting Self-Efficacy and Regular Exercise in Older African American Women

    ERIC Educational Resources Information Center

    Quinn, Mary Ellen; Guion, W. Kent

    2010-01-01

    The health benefits of regular exercise are well documented, yet there has been limited success in the promotion of regular exercise in older African American women. Based on theoretical and evidence-based findings, the authors recommend a behavioral self-efficacy approach to guide exercise interventions in this high-risk population. Interventions…

  6. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise in its solution. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local degree of nonlinearity of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.

  7. Lq-Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging, applied here to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.

  8. Twisting singular solutions of Betheʼs equations

    NASA Astrophysics Data System (ADS)

    Nepomechie, Rafael I.; Wang, Chunguang

    2014-12-01

    The Bethe equations for the periodic XXX and XXZ spin chains admit singular solutions, for which the corresponding eigenvalues and eigenvectors are ill-defined. We use a twist regularization to derive conditions for such singular solutions to be physical, in which case they correspond to genuine eigenvalues and eigenvectors of the Hamiltonian.

  9. A Note on Weak Solutions of Conservation Laws and Energy/Entropy Conservation

    NASA Astrophysics Data System (ADS)

    Gwiazda, Piotr; Michálek, Martin; Świerczewska-Gwiazda, Agnieszka

    2018-03-01

    A common feature of systems of conservation laws of continuum physics is that they are endowed with natural companion laws which are in such cases most often related to the second law of thermodynamics. This observation easily generalizes to any symmetrizable system of conservation laws; they are endowed with nontrivial companion conservation laws, which are immediately satisfied by classical solutions. Not surprisingly, weak solutions may fail to satisfy companion laws, which are then often relaxed from equality to inequality and overtake the role of physical admissibility conditions for weak solutions. We want to answer the question: what is a critical regularity of weak solutions to a general system of conservation laws to satisfy an associated companion law as an equality? An archetypal example of such a result was derived for the incompressible Euler system in the context of Onsager's conjecture in the early nineties. This general result can serve as a simple criterion to numerous systems of mathematical physics to prescribe the regularity of solutions needed for an appropriate companion law to be satisfied.

  10. A multi-resolution approach to electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-07-01

    We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which achieves accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  11. Regular expansion solutions for small Peclet number heat or mass transfer in concentrated two-phase particulate systems

    NASA Technical Reports Server (NTRS)

    Yaron, I.

    1974-01-01

    Steady state heat or mass transfer in concentrated ensembles of drops, bubbles or solid spheres in uniform, slow viscous motion, is investigated. Convective effects at small Peclet numbers are taken into account by expanding the nondimensional temperature or concentration in powers of the Peclet number. Uniformly valid solutions are obtained, which reflect the effects of dispersed phase content and rate of internal circulation within the fluid particles. The dependence of the range of Peclet and Reynolds numbers, for which regular expansions are valid, on particle concentration is discussed.

  12. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-12-01

    In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  13. GRACE L1b inversion through a self-consistent modified radial basis function approach

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Kusche, Juergen; Rietbroek, Roelof; Eicker, Annette

    2016-04-01

    Implementing a regional geopotential representation such as mascons or, more generally, RBFs (radial basis functions) has been widely accepted as an efficient and flexible approach to recovering the gravity field from GRACE (Gravity Recovery and Climate Experiment), especially in higher-latitude regions like Greenland. This is because RBFs allow for regionally specific regularizations over areas which have sufficient and dense GRACE observations. Although existing RBF solutions show a better resolution than classical spherical harmonic solutions, the applied regularizations cause spatial leakage, which should be carefully dealt with. It has been shown that leakage is a main error source which leads to an evident underestimation of the yearly trend of ice melting over Greenland. Unlike popular post-processing techniques that mitigate leakage signals, this study, for the first time, attempts to reduce the leakage directly in the GRACE L1b inversion by constructing an innovative modified RBF (MRBF) basis in place of the standard RBFs to retrieve a more realistic temporal gravity signal along the coastline. Our point of departure is that the surface mass loading associated with a standard RBF is smooth but disregards physical consistency between continental mass and passive ocean response. In this contribution, based on earlier work by Clarke et al. (2007), a physically self-consistent MRBF representation is constructed from standard RBFs with the help of the sea level equation: for a given standard RBF basis, the corresponding MRBF basis is first obtained by keeping the surface load over the continent unchanged, but imposing global mass conservation and equilibrium response of the oceans. Then, the updated set of MRBFs as well as the standard RBFs are individually employed as basis functions to determine the temporal gravity field from GRACE L1b data. In this way, in the MRBF GRACE solution, the passive (e.g. ice melting and land hydrology response) sea level is automatically separated from ocean dynamic effects, and our hypothesis is that this improves the partitioning of the GRACE signals into land and ocean contributions along the coastline. In particular, we inspect ice melting over Greenland from real GRACE data, and we evaluate the ability of the MRBF approach to recover true mass variations along the coastline. Finally, using independent measurements from multiple techniques, including GPS vertical motion and altimetry, a validation is presented to quantify to what extent it is possible to reduce the leakage through the MRBF approach.
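
    The mass-conservation ingredient of the MRBF construction can be caricatured in a few lines (a deliberately crude eustatic sketch: the actual construction solves the full sea level equation of Clarke et al. (2007), including self-gravitation and the equilibrium ocean response; the grid, cell areas and mask below are made up):

        import numpy as np

        # Illustrative grid: continental surface load (kg/m^2) and an ocean mask.
        rng = np.random.default_rng(0)
        load_land = rng.normal(size=(90, 180))   # continental part of one RBF
        ocean = rng.random((90, 180)) > 0.3      # True where ocean
        load_land[ocean] = 0.0
        cell_area = np.ones_like(load_land)      # equal-area cells assumed

        # Eustatic (uniform-layer) ocean response: spread the negative of the
        # continental mass uniformly over the ocean so total mass is conserved.
        land_mass = np.sum(load_land * cell_area)
        ocean_area = np.sum(cell_area[ocean])
        passive_sea_level = -land_mass / ocean_area

        load_total = load_land.copy()
        load_total[ocean] = passive_sea_level
        print("global mass after correction:", np.sum(load_total * cell_area))  # ~0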

  14. Black holes in vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heisenberg, Lavinia; Kase, Ryotaro; Tsujikawa, Shinji

    We study static and spherically symmetric black hole (BH) solutions in second-order generalized Proca theories with nonminimal vector field derivative couplings to the Ricci scalar, the Einstein tensor, and the double dual Riemann tensor. We find concrete Lagrangians which give rise to exact BH solutions by imposing two conditions of the two identical metric components and the constant norm of the vector field. These exact solutions are described by either Reissner-Nordström (RN), stealth Schwarzschild, or extremal RN solutions with a non-trivial longitudinal mode of the vector field. We then numerically construct BH solutions without imposing these conditions. For cubic and quartic Lagrangians with power-law couplings which encompass vector Galileons as the specific cases, we show the existence of BH solutions with the difference between two non-trivial metric components. The quintic-order power-law couplings do not give rise to non-trivial BH solutions regular throughout the horizon exterior. The sixth-order and intrinsic vector-mode couplings can lead to BH solutions with a secondary hair. For all the solutions, the vector field is regular at least at the future or past horizon. The deviation from General Relativity induced by the Proca hair can be potentially tested by future measurements of gravitational waves in the nonlinear regime of gravity.

  15. Hip-hop solutions of the 2N-body problem

    NASA Astrophysics Data System (ADS)

    Barrabés, Esther; Cors, Josep Maria; Pinyol, Conxita; Soler, Jaume

    2006-05-01

    Hip-hop solutions of the 2N-body problem with equal masses are shown to exist using an analytic continuation argument. These solutions are close to planar regular 2N-gon relative equilibria with small vertical oscillations. For fixed N, infinitely many of these solutions are three-dimensional choreographies, with all the bodies moving along the same closed curve in the inertial frame.

  16. Mechanical properties of regular porous biomaterials made from truncated cube repeating unit cells: Analytical solutions and computational models.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-03-01

    Additive manufacturing (AM) has enabled fabrication of open-cell porous biomaterials based on repeating unit cells. The micro-architecture of the porous biomaterials and, thus, their physical properties could then be precisely controlled. Due to their many favorable properties, porous biomaterials manufactured using AM are considered as promising candidates for bone substitution as well as for several other applications in orthopedic surgery. The mechanical properties of such porous structures, including static and fatigue properties, are shown to be strongly dependent on the type of the repeating unit cell based on which the porous biomaterial is built. In this paper, we study the mechanical properties of porous biomaterials made from a relatively new unit cell, namely the truncated cube. We present analytical solutions that relate the dimensions of the repeating unit cell to the elastic modulus, Poisson's ratio, yield stress, and buckling load of those porous structures. We also performed finite element modeling to predict the mechanical properties of the porous structures. The analytical solutions and computational results were found to be in agreement with each other. The mechanical properties estimated using both the analytical and computational techniques were somewhat higher than the experimental data reported in one of our recent studies on selective laser melted Ti-6Al-4V porous biomaterials. In addition to porosity, the elastic modulus and Poisson's ratio of the porous structures were found to be strongly dependent on the ratio of the length of the inclined struts to that of the uninclined (i.e. vertical or horizontal) struts, α, in the truncated cube unit cell. The geometry of the truncated cube unit cell approaches those of the octahedral and cube unit cells as α approaches zero and infinity, respectively. Consistent with these geometrical observations, the analytical solutions presented in this study approached those of the octahedral and cube unit cells as α approached 0 and infinity, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Dynamics from a mathematical model of a two-state gas laser

    NASA Astrophysics Data System (ADS)

    Kleanthous, Antigoni; Hua, Tianshu; Manai, Alexandre; Yawar, Kamran; Van Gorder, Robert A.

    2018-05-01

    Motivated by recent work in the area, we consider the behavior of solutions to a nonlinear PDE model of a two-state gas laser. We first review the derivation of the two-state gas laser model, before deriving a non-dimensional model given in terms of coupled nonlinear partial differential equations. We then classify the steady states of this system, in order to determine the possible long-time asymptotic solutions to this model, as well as corresponding stability results, showing that the only uniform steady state (the zero motion state) is unstable, while a linear profile in space is stable. We then provide numerical simulations for the full unsteady model. We show for a wide variety of initial conditions that the solutions tend toward the stable linear steady state profiles. We also consider traveling wave solutions, and determine the unique wave speed (in terms of the other model parameters) which allows wave-like solutions to exist. Despite some similarities between the model and the inviscid Burgers' equation, the solutions we obtain are much more regular than the solutions to the inviscid Burgers' equation, with no evidence of shock formation or loss of regularity.

  18. Seeded Growth Route to Noble Calcium Carbonate Nanocrystal.

    PubMed

    Islam, Aminul; Teo, Siow Hwa; Rahman, M Aminur; Taufiq-Yap, Yun Hin

    2015-01-01

    A solution-phase route has been considered as the most promising route to synthesize noble nanostructures. Most synthesis approaches for calcium carbonate (CaCO3) are based on either using fungi or CO2 bubbling methods. Here, we approached the preparation of nano-precipitated calcium carbonate single crystals from Salmacis sphaeroides in the presence of zwitterionic or cationic biosurfactants without an external source of CO2. The calcium carbonate crystals had a rhombohedral structure and were regularly shaped, with side dimensions ranging from 33 to 41 nm. The high degree of morphological control of the CaCO3 nanocrystals suggested that surfactants are capable of strongly interacting with the CaCO3 surface and controlling the nucleation and growth direction of the calcium carbonate nanocrystals. Finally, the mechanism of formation of the nanocrystals in light of the proposed routes is also discussed.

  20. A Probabilistic Approach to Interior Regularity of Fully Nonlinear Degenerate Elliptic Equations in Smooth Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Wei, E-mail: zhoux123@umn.edu

    2013-06-15

    We consider the value function of a stochastic optimal control of degenerate diffusion processes in a domain D. We study the smoothness of the value function, under the assumption of the non-degeneracy of the diffusion term along the normal to the boundary and an interior condition weaker than the non-degeneracy of the diffusion term. When the diffusion term, drift term, discount factor, running payoff and terminal payoff are all in the class $C^{1,1}(\bar{D})$, the value function turns out to be the unique solution in the class $C^{1,1}_{loc}(D) \cap C^{0,1}(\bar{D})$ to the associated degenerate Bellman equation with Dirichlet boundary data. Our approach is probabilistic.

  1. A regularity condition and temporal asymptotics for chemotaxis-fluid equations

    NASA Astrophysics Data System (ADS)

    Chae, Myeongju; Kang, Kyungkeun; Lee, Jihoon; Lee, Ki-Ahm

    2018-02-01

    We consider two-dimensional chemotaxis equations coupled to the Navier-Stokes equations. We present a new regularity criterion that is localized in a neighborhood of each point. Secondly, we establish temporal decay of the regular solutions under the assumption that the initial mass of the biological cell density is sufficiently small. Both results improve previously known results given in Chae et al (2013 Discrete Continuous Dyn. Syst. A 33 2271-97) and Chae et al (2014 Commun. PDE 39 1205-35).

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.

    In this review, we apply selected imputation strategies to label-free liquid chromatography–mass spectrometry (LC–MS) proteomics datasets to evaluate their accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches for their individual merits and discuss the caveats of each approach with respect to the example LC–MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performance with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases performing classification without imputation yielded the most accurate classification. Thus, because of the complex mechanisms of missing data in proteomics, which also vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasouli, C.; Abbasi Davani, F.; Rokrok, B.

    Plasma confinement using an external magnetic field is one of the successful ways leading to controlled nuclear fusion. Development and validation of the solution process for plasma equilibrium in experimental toroidal fusion devices is the main subject of this work. Solution of the nonlinear 2D stationary problem as posed by the Grad-Shafranov equation gives quantitative information about plasma equilibrium inside the vacuum chamber of hot fusion devices. This study suggests solving the plasma equilibrium equation, which is essential in toroidal nuclear fusion devices, using a mesh-free method under the condition that the plasma boundary is unknown. The Grad-Shafranov equation has been solved numerically by the point interpolation collocation mesh-free method. Important features of this approach include a truly mesh-free formulation, simple mathematical relationships between points, and acceptable precision in comparison with the parametric results. The calculation process has been carried out using regular and irregular nodal distributions and support domains with different numbers of points. The relative error between the numerical and analytical solutions is discussed for several test examples such as the small-size Damavand tokamak, an ITER-like equilibrium, an NSTX-like equilibrium, and a typical spheromak.
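
    For reference, the equation being solved is the Grad-Shafranov equation, which in cylindrical coordinates $(R, z)$ reads (standard form)

        $$R \frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right) + \frac{\partial^2 \psi}{\partial z^2} = -\mu_0 R^2 \frac{dp}{d\psi} - F \frac{dF}{d\psi},$$

    where $\psi$ is the poloidal flux function, $p(\psi)$ the plasma pressure and $F(\psi) = R B_{\phi}$ the poloidal current function; a collocation method enforces this relation at scattered nodes instead of on a mesh.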

  4. Description of waves in inhomogeneous domains using Heun's equation

    NASA Astrophysics Data System (ADS)

    Bednarik, M.; Cervenka, M.

    2018-04-01

    There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains, and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of an inhomogeneous domain. From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. The interval for which the solution is sought, however, includes two regular singular points; for this reason, a method is proposed which resolves this problem based only on the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.

  5. Lipschitz regularity results for nonlinear strictly elliptic equations and applications

    NASA Astrophysics Data System (ADS)

    Ley, Olivier; Nguyen, Vinh Duc

    2017-10-01

    Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic, for instance). For equations with superquadratic growth power in the gradient, one usually uses weak Bernstein-type arguments, which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth power of the Hamiltonian with respect to the gradient variable, using some ideas coming from Ishii-Lions' method. We use these bounds to solve an ergodic problem and to study the regularity and the large time behavior of the solution of the evolution equation.

  6. The rotation axis for stationary and axisymmetric space-times

    NASA Astrophysics Data System (ADS)

    van den Bergh, N.; Wils, P.

    1985-03-01

    A set of 'extended' regularity conditions is discussed which have to be satisfied on the rotation axis if the latter is assumed to be also an axis of symmetry. For a wide class of energy-momentum tensors these conditions can only hold at the origin of the Weyl canonical coordinate. For static and cylindrically symmetric space-times the conditions can be derived from the regularity of the Riemann tetrad coefficients on the axis. For stationary space-times, however, the extended conditions do not necessarily hold, even when 'elementary flatness' is satisfied and when there are no curvature singularities on the axis. The result by Davies and Caplan (1971) for cylindrically symmetric stationary Einstein-Maxwell fields is generalized by proving that only Minkowski space-time and a particular magnetostatic solution possess a regular axis of rotation. Further, several sets of solutions for neutral and charged, rigidly and differentially rotating dust are discussed.

  7. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
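
    For readers who want to experiment, the core single-parameter building block of such inversions (Tikhonov regularization with a nonnegativity constraint) can be sketched as follows; this is a generic toy, not the 2DUPEN/I2DUPEN algorithm, which adapts multiple local regularization parameters automatically, and the kernel and noise level are made up:

        import numpy as np
        from scipy.optimize import nnls

        def tikhonov_nnls(K, s, lam):
            """Solve min_{f >= 0} ||K f - s||^2 + lam * ||f||^2 by stacking the
            penalty under K as an augmented least-squares system."""
            n = K.shape[1]
            K_aug = np.vstack([K, np.sqrt(lam) * np.eye(n)])
            s_aug = np.concatenate([s, np.zeros(n)])
            f, _ = nnls(K_aug, s_aug)
            return f

        # Toy ill-posed problem: smooth exponential kernel, noisy data.
        t = np.linspace(0.01, 1, 60)
        T2 = np.linspace(0.01, 1, 40)
        K = np.exp(-t[:, None] / T2[None, :])
        f_true = np.exp(-0.5 * ((T2 - 0.4) / 0.05) ** 2)
        s = K @ f_true + 1e-3 * np.random.default_rng(1).normal(size=t.size)
        f_hat = tikhonov_nnls(K, s, lam=1e-3)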

  8. Singular Value Decomposition Method to Determine Distance Distributions in Pulsed Dipolar Electron Spin Resonance.

    PubMed

    Srivastava, Madhur; Freed, Jack H

    2017-11-16

    Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude in improvement in SNR obviates the need for regularization, which achieves a compromise between canceling effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulse dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r) between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.
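
    A minimal sketch of the direct SVD inversion idea, with a simple relative cutoff standing in for the paper's optimality criteria (the kernel, noise level and threshold below are illustrative only):

        import numpy as np

        def tsvd_solve(K, s, noise_level):
            """Truncated-SVD solution of K f = s: keep only singular values
            safely above the noise floor, zero out the rest."""
            U, sig, Vt = np.linalg.svd(K, full_matrices=False)
            keep = sig > noise_level * sig[0]        # simple relative cutoff
            inv_sig = np.where(keep, 1.0 / sig, 0.0)
            return Vt.T @ (inv_sig * (U.T @ s))

        # Toy example: a smooth kernel of the kind that makes the problem ill-posed.
        x = np.linspace(0, 1, 50)
        K = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
        f_true = np.sin(2 * np.pi * x)
        s = K @ f_true + 1e-4 * np.random.default_rng(2).normal(size=x.size)
        f_hat = tsvd_solve(K, s, noise_level=1e-3)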

  9. Multidimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering

    NASA Astrophysics Data System (ADS)

    Sapia, Mark Angelo

    2000-11-01

    Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two-dimensional ultrasound images are also degraded by convolutional blurring and various sources of noise. Speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The process is to solve for the filter completely in the spatial domain, using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1) it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2) it achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages, such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative-valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
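
    The least-mean-square adaptation at the heart of this approach is easy to sketch in one dimension (a simplified toy with made-up sizes and step size; the dissertation works on 2D/3D images):

        import numpy as np

        def lms_inverse_filter(blurred, reference, n_taps=21, mu=0.01, n_epochs=20):
            """Adapt FIR filter taps w so that (w * blurred) approximates reference."""
            w = np.zeros(n_taps)
            for _ in range(n_epochs):
                for k in range(n_taps, len(blurred)):
                    x = blurred[k - n_taps + 1:k + 1][::-1]  # current input window
                    e = reference[k] - w @ x                 # instantaneous error
                    w += mu * e * x                          # LMS weight update
            return w

        # Toy deconvolution: blur a signal with a known kernel, then learn an
        # approximate inverse filter from (blurred, original) training pairs.
        rng = np.random.default_rng(3)
        signal = rng.normal(size=2000)
        psf = np.array([0.25, 0.5, 0.25])                    # simple blur kernel
        blurred = np.convolve(signal, psf, mode="same")
        w = lms_inverse_filter(blurred, signal)
        restored = np.convolve(blurred, w)[:len(blurred)]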

  11. Bianchi type-I magnetized cosmological models for the Einstein-Boltzmann equation with the cosmological constant

    NASA Astrophysics Data System (ADS)

    Ayissi, Raoul Domingo; Noutchegueme, Norbert

    2015-01-01

    Global regular solutions of the Einstein-Boltzmann equation on a magnetized Bianchi type-I cosmological model with the cosmological constant are investigated. We suppose that the metric is locally rotationally symmetric. The Einstein-Boltzmann equation has already been considered by some authors. Bancel and Choquet-Bruhat [Ann. Henri Poincaré XVIII(3), 263 (1973); Commun. Math. Phys. 33, 83 (1973)] proved only local existence, and only in the case of the nonrelativistic Boltzmann equation. Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 52] obtained a global existence result for the relativistic Boltzmann equation coupled with the Einstein equations using the Yosida operator, but unfortunately conflated it with the nonrelativistic case. Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)] and Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)] obtained a global solution in time, but still using the Yosida operator and considering only the uncharged case. Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)] also proved global existence of solutions to the Maxwell-Boltzmann system using the characteristic method. In this paper, using a method totally different from those used in the works of Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)], Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)], Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)], and Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 52], we obtain the global-in-time existence and uniqueness of a regular solution to the Einstein-Maxwell-Boltzmann system with the cosmological constant. We define and use weighted Sobolev separable spaces for the Boltzmann equation and some special spaces for the Einstein equations; we then clearly display all the proofs leading to the global existence theorems.

  12. Assessing the impact of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping

    2014-05-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated, accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii), and the consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields, and we compare the effects of the different weighting structures adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained by restoring the NTAL displacements and the standard stacked linear reference frames are discussed.

  13. Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II

    DTIC Science & Technology

    2016-09-01

    ...of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for... plays a central role in many applications, including image processing, computer vision and statistics, etc. [13, 17, 20, 24]. The EMD is a metric defined...

  14. Regularities of the sorption of 1,2,3,4-tetrahydroquinoline derivatives under conditions of reversed phase HPLC

    NASA Astrophysics Data System (ADS)

    Nekrasova, N. A.; Kurbatova, S. V.; Zemtsova, M. N.

    2016-12-01

    Regularities of the sorption of 1,2,3,4-tetrahydroquinoline derivatives on octadecylsilyl silica gel and porous graphitic carbon from aqueous acetonitrile solutions were investigated. The effect that the molecular structure and physicochemical parameters of the sorbates have on their retention characteristics under conditions of reversed-phase HPLC is analyzed.

  15. Comparison of quantitative myocardial perfusion imaging CT to fluorescent microsphere-based flow from high-resolution cryo-images

    NASA Astrophysics Data System (ADS)

    Eck, Brendan L.; Fahmi, Rachid; Levi, Jacob; Fares, Anas; Wu, Hao; Li, Yuemeng; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) has the potential to provide quantitative measures of myocardial blood flow (MBF) which can aid the diagnosis of coronary artery disease. We evaluated the quantitative accuracy of MPI-CT in a porcine model of balloon-induced LAD coronary artery ischemia guided by fractional flow reserve (FFR). We quantified MBF at baseline (FFR = 1.0) and under moderate ischemia (FFR = 0.7) using MPI-CT and compared to fluorescent microsphere-based MBF from high-resolution cryo-images. Dynamic, contrast-enhanced CT images were obtained using a spectral detector CT (Philips Healthcare). Projection-based mono-energetic images were reconstructed and processed to obtain MBF. Three MBF quantification approaches were evaluated: singular value decomposition (SVD) with fixed Tikhonov regularization (ThSVD), SVD with regularization determined by the L-curve criterion (LSVD), and Johnson-Wilson parameter estimation (JW). The three approaches over-estimated MBF compared to cryo-images. JW produced the most accurate MBF, with average error 33.3 ± 19.2 mL/min/100 g, whereas LSVD and ThSVD had greater over-estimation, 59.5 ± 28.3 mL/min/100 g and 78.3 ± 25.6 mL/min/100 g, respectively. Relative blood flow as assessed by a flow ratio of LAD-to-remote myocardium was strongly correlated between JW and cryo-imaging, with R² = 0.97, compared to R² = 0.88 and 0.78 for LSVD and ThSVD, respectively. We assessed tissue impulse response functions (IRFs) from each approach for sources of error. While JW was constrained to physiologic solutions, both LSVD and ThSVD produced IRFs with non-physiologic properties due to noise. The L-curve provided noise-adaptive regularization but did not eliminate non-physiologic IRF properties or optimize for MBF accuracy. These findings suggest that model-based MPI-CT approaches may be more appropriate for quantitative MBF estimation and that cryo-imaging can support the development of MPI-CT by providing spatial distributions of MBF.
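
    The L-curve criterion mentioned for LSVD is a generic recipe that can be sketched as follows (a common discrete implementation choosing the corner by maximum curvature; not necessarily the exact variant used in this study):

        import numpy as np

        def lcurve_lambda(K, s, lambdas):
            """Pick the Tikhonov parameter at the L-curve corner, i.e. the point
            of maximum curvature of (log residual norm, log solution norm)."""
            U, sig, Vt = np.linalg.svd(K, full_matrices=False)
            beta = U.T @ s
            rho, eta = [], []
            for lam in lambdas:
                f = Vt.T @ (sig / (sig**2 + lam**2) * beta)   # Tikhonov solution
                rho.append(np.log(np.linalg.norm(K @ f - s)))
                eta.append(np.log(np.linalg.norm(f)))
            rho, eta = np.array(rho), np.array(eta)
            dr, de = np.gradient(rho), np.gradient(eta)
            d2r, d2e = np.gradient(dr), np.gradient(de)
            curvature = (dr * d2e - d2r * de) / (dr**2 + de**2) ** 1.5
            return lambdas[np.argmax(np.abs(curvature))]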

  16. A simple homogeneous model for regular and irregular metallic wire media samples

    NASA Astrophysics Data System (ADS)

    Kosulnikov, S. Y.; Mirmoosa, M. S.; Simovski, C. R.

    2018-02-01

    To simplify the solution of electromagnetic problems with wire media (WM) samples, it is reasonable to treat them as samples of a homogeneous material without spatial dispersion. Accounting for spatial dispersion implies additional boundary conditions and makes the solution of boundary problems difficult, especially if the sample is not an infinitely extended layer. Moreover, for a novel type of wire media - arrays of randomly tilted wires - a spatially dispersive model has not been developed. Here, we introduce a simplistic heuristic model of wire media samples shaped as bricks. Our model covers WM of both regularly and irregularly stretched wires.

  17. The Mimetic Finite Element Method and the Virtual Element Method for elliptic problems with arbitrary regularity.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco

    2012-07-13

    We develop and analyze a new family of virtual element methods on unstructured polygonal meshes for the diffusion problem in primal form, that use arbitrarily regular discrete spaces $V_h \subset C^{\alpha}$, $\alpha \in \mathbb{N}$. The degrees of freedom are (a) solution and derivative values of various degree at suitable nodes and (b) solution moments inside polygons. The convergence of the method is proven theoretically and an optimal error estimate is derived. The connection with the Mimetic Finite Difference method is also discussed. Numerical experiments confirm the convergence rate that is expected from the theory.

  18. The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau Equation II. Contraction Methods

    NASA Astrophysics Data System (ADS)

    Ginibre, J.; Velo, G.

    We continue the study of the initial value problem for the complex Ginzburg-Landau equation (with a > 0, b > 0, g ≥ 0) in $\mathbb{R}^n$, initiated in a previous paper [I]. We treat the case where the initial data and the solutions belong to local uniform spaces, more precisely to spaces of functions satisfying local regularity conditions and uniform bounds in local norms, but no decay conditions (or arbitrarily weak decay conditions) at infinity in $\mathbb{R}^n$. In [I] we used compactness methods and an extended version of recent local estimates [3] and proved in particular the existence of solutions globally defined in time with local regularity of the initial data corresponding to the spaces $L^r$ for $r \ge 2$ or $H^1$. Here we treat the same problem by contraction methods. This allows us in particular to prove that the solutions obtained in [I] are unique under suitable subcriticality conditions, and to obtain for them additional regularity properties and uniform bounds. The method extends some of those previously applied to the nonlinear heat equation in global spaces to the framework of local uniform spaces.

  19. Divergent series and memory of the initial condition in the long-time solution of some anomalous diffusion problems.

    PubMed

    Yuste, S Bravo; Borrego, R; Abad, E

    2010-02-01

    We consider various anomalous d-dimensional diffusion problems in the presence of an absorbing boundary with radial symmetry. The motion of particles is described by a fractional diffusion equation. Their mean-square displacement is given by $\langle r^2 \rangle \propto t^{\gamma}$ ($0 < \gamma < 1$). The long-time solution of such problems can be expressed in terms of divergent series; the emergence of such series in the long-time domain is a specific feature of subdiffusion problems. We present a method to regularize such series, and, in some cases, validate the procedure by using alternative techniques (Laplace transform method and numerical simulations). In the normal diffusion case, we find that the signature of the initial condition on the approach to the steady state rapidly fades away and the solution approaches a single (the main) decay mode in the long-time regime. In remarkable contrast, long-time memory of the initial condition is present in the subdiffusive case, as the spatial part $\Psi_1(r)$ describing the long-time decay of the solution to the steady state is determined by a weighted superposition of all spatial modes characteristic of the normal diffusion problem, the weight being dependent on the initial condition. Interestingly, $\Psi_1(r)$ turns out to be independent of the anomalous diffusion exponent $\gamma$.

  20. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    NASA Astrophysics Data System (ADS)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers leading to terabyte-scale data cubes, the story of the giant SKA telescope is also that of collaborative efforts from radioastronomy, signal processing, optimization and computer sciences. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses both challenges, if only partially. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and distribution of the computational load over the spectral bands. This work will allow us to implement and compare various 3D reconstruction approaches in a large-scale framework.

  1. Aerial vehicles collision avoidance using monocular vision

    NASA Astrophysics Data System (ADS)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle, a system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles at ranges of several kilometers under regular daylight conditions.

  2. Surface defects on the Gd2Zr2O7 oxide films grown on textured NiW technical substrates by chemical solution method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Y., E-mail: yuezhao@sjtu.edu.cn

    2017-02-15

    Epitaxial growth of oxide thin films has attracted much interest because of their broad applications in various fields. In this study, we investigated the microstructure of textured Gd2Zr2O7 films grown on (001)〈100〉 orientated NiW alloy substrates by a chemical solution deposition (CSD) method. The aging effect of the precursor solution on defect formation was thoroughly investigated. A slight difference was observed between the as-obtained and aged precursor solutions with respect to the phase purity and global texture of films prepared using these solutions. However, the surface morphologies are different, i.e., some regular-shaped regions (mainly hexagonal or dodecagonal) were observed on the film prepared using the as-obtained precursor, whereas the film prepared using the aged precursor exhibits a homogeneous structure. Electron backscatter diffraction and scanning electron microscopy analyses showed that the Gd2Zr2O7 grains present within the regular-shaped regions are polycrystalline, whereas those present in the surroundings are epitaxial. Some polycrystalline regions ranging from several micrometers to several tens of micrometers grew across the NiW grain boundaries underneath. To understand this phenomenon, the properties of the precursors and the corresponding xerogel were studied by Fourier transform infrared spectroscopy and coupled thermogravimetry/differential thermal analysis. The results showed that both solutions mainly contain small Gd−Zr−O clusters obtained by the reaction of zirconium acetylacetonate with propionic acid during the precursor synthesis. The regular-shaped regions were probably formed by large Gd−Zr−O frameworks with a metastable structure in the solution with limited aging time. This study demonstrates the importance of precise control of the chemical reaction path to enhance the stability and homogeneity of the precursors of the CSD route. - Highlights: •We investigate the microstructure of Gd2Zr2O7 films grown by a chemical solution route. •The aging effect of the precursor solution on the formation of surface defects was thoroughly studied. •Gd−Zr−O clusters are present in the precursor solutions.

  3. Dynamical black holes in low-energy string theory

    NASA Astrophysics Data System (ADS)

    Aniceto, Pedro; Rocha, Jorge V.

    2017-05-01

    We investigate time-dependent spherically symmetric solutions of the four-dimensional Einstein-Maxwell-axion-dilaton system, with the dilaton coupling that occurs in low-energy effective heterotic string theory. A class of dilaton-electrovacuum radiating solutions with a trivial axion, previously found by Güven and Yörük, is re-derived in a simpler manner and its causal structure is clarified. It is shown that such dynamical spacetimes featuring apparent horizons do not possess a regular light-like past null infinity or future null infinity, depending on whether they are radiating or accreting. These solutions are then extended in two ways. First we consider a Vaidya-like generalisation, which introduces a null dust source. Such spacetimes are used to test the status of cosmic censorship in the context of low-energy string theory. We prove that — within this family of solutions — regular black holes cannot evolve into naked singularities by accreting null dust, unless standard energy conditions are violated. Secondly, we employ S-duality to derive new time-dependent dyon solutions with a nontrivial axion turned on. Although they share the same causal structure as their Einstein-Maxwell-dilaton counterparts, these solutions possess both electric and magnetic charges.

  4. Evaluation of global equal-area mass grid solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron

    2015-04-01

    The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, cryosphere studies, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.

  5. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
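
    A simplified numerical caricature of the idea (plain eigenvalue clipping to a target condition number; the paper instead derives the optimal truncation level by maximum likelihood rather than fixing it a priori):

        import numpy as np

        def condreg_simple(S, kappa_max):
            """Raise the small eigenvalues of a sample covariance S so that the
            condition number of the estimate is at most kappa_max."""
            vals, vecs = np.linalg.eigh(S)
            floor = vals.max() / kappa_max        # smallest allowed eigenvalue
            vals_reg = np.clip(vals, floor, None)
            return (vecs * vals_reg) @ vecs.T     # V diag(vals_reg) V^T

        # "large p, small n" toy example: p = 50 variables, n = 20 samples.
        rng = np.random.default_rng(4)
        X = rng.normal(size=(20, 50))
        S = np.cov(X, rowvar=False)               # rank-deficient, cond = inf
        S_reg = condreg_simple(S, kappa_max=100.0)
        print(np.linalg.cond(S_reg))              # <= 100 (up to rounding)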

  7. Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions

    NASA Astrophysics Data System (ADS)

    Leiderman, Karin; Olson, Sarah D.

    2016-02-01

    The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.

  8. Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

    PubMed

    Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping

    2018-05-01

    In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information about the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to converge to the feasible region in finite time and subsequently to the optimal solution set of the related optimization problem. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and an application of the proposed network to a wider class of dynamic portfolio optimization problems are included. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Measuring, Enabling and Comparing Modularity, Regularity and Hierarchy in Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2005-01-01

    For computer-automated design systems to scale to complex designs they must be able to produce designs that exhibit the characteristics of modularity, regularity and hierarchy - characteristics that are found both in man-made and natural designs. Here we claim that these characteristics are enabled by implementing the attributes of combination, control-flow and abstraction in the representation. To support this claim we use an evolutionary algorithm to evolve solutions to different sizes of a table design problem using five different representations, each with different combinations of modularity, regularity and hierarchy enabled, and show that the best performance happens when all three of these attributes are enabled. We also define metrics for modularity, regularity and hierarchy in design encodings and demonstrate that high fitness values are achieved with high values of modularity, regularity and hierarchy and that there is a positive correlation between increases in fitness and increases in modularity, regularity and hierarchy.

  10. An improved genetic algorithm for designing optimal temporal patterns of neural stimulation

    NASA Astrophysics Data System (ADS)

    Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.

    2017-12-01

    Objective. Electrical neuromodulation therapies typically apply constant frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended to design optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
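
    A bare-bones GA of the kind being modified here can be sketched as follows (standard selection, crossover and mutation only, with a made-up surrogate fitness in place of the expensive biophysical model; none of the paper's specific modifications are reproduced, and all sizes are hypothetical):

        import numpy as np

        rng = np.random.default_rng(5)
        GENOME_LEN, POP, GENS = 50, 40, 200          # hypothetical sizes

        def fitness(pattern):
            # Surrogate objective: hit a target pulse density cheaply.
            return -abs(pattern.mean() - 0.3)

        pop = rng.integers(0, 2, size=(POP, GENOME_LEN))   # binary pulse trains
        for _ in range(GENS):
            scores = np.array([fitness(p) for p in pop])
            # binary tournament selection
            i, j = rng.integers(0, POP, POP), rng.integers(0, POP, POP)
            winners = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
            # one-point crossover on consecutive pairs
            children = winners.copy()
            for k in range(0, POP, 2):
                c = rng.integers(1, GENOME_LEN)
                children[k, c:] = winners[k + 1, c:]
                children[k + 1, c:] = winners[k, c:]
            # bit-flip mutation
            flip = rng.random(children.shape) < 0.01
            pop = np.where(flip, 1 - children, children)

        best = max(pop, key=fitness)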

  11. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel; hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
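
    The trace ratio maximization at the core of such frameworks is the classic fixed-point iteration sketched below (the generic algorithm for maximizing tr(V'AV)/tr(V'BV) over orthonormal V; the kernel-learning layer of MKL-TR built around it is not reproduced):

        import numpy as np

        def trace_ratio(A, B, dim, n_iter=50, tol=1e-8):
            """Maximize tr(V'AV)/tr(V'BV) over orthonormal V (p x dim) by the
            standard fixed-point iteration on the ratio lambda."""
            lam = 0.0
            for _ in range(n_iter):
                # top-dim eigenvectors of A - lam * B
                vals, vecs = np.linalg.eigh(A - lam * B)
                V = vecs[:, -dim:]
                lam_new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
                if abs(lam_new - lam) < tol:
                    break
                lam = lam_new
            return V, lam

        # Toy symmetric matrices with B positive definite.
        rng = np.random.default_rng(6)
        M = rng.normal(size=(10, 10)); A = M + M.T
        N = rng.normal(size=(10, 10)); B = N @ N.T + 10 * np.eye(10)
        V, lam = trace_ratio(A, B, dim=3)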

  12. A class of nonideal solutions. 1: Definition and properties

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.

    1983-01-01

    A class of nonideal solutions is defined by constructing a function to represent the composition dependence of thermodynamic properties for members of the class, and some properties of these solutions are studied. The constructed function has several useful features: (1) its parameters occur linearly; (2) it contains a logarithmic singularity in the dilute solution region and contains ideal solutions and regular solutions as special cases; and (3) it is applicable to N-ary systems and reduces to M-ary systems (M ≤ N) in a form-invariant manner.
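
    For orientation, the regular-solution special case mentioned in feature (2) takes, for a binary mixture, the standard textbook form

        $$G^E = \Omega\, x_1 x_2, \qquad RT \ln \gamma_1 = \Omega\, x_2^2, \qquad RT \ln \gamma_2 = \Omega\, x_1^2,$$

    with a single interaction parameter $\Omega$ and no excess entropy; setting $\Omega = 0$ recovers the ideal solution. The constructed function generalizes such forms while keeping its parameters linear.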

  13. Modelisation of the SECM in molten salts environment

    NASA Astrophysics Data System (ADS)

    Lucas, M.; Slim, C.; Delpech, S.; di Caprio, D.; Stafiej, J.

    2014-06-01

    We develop a cellular automata model of SECM experiments to study corrosion in molten salt media for generation IV nuclear reactors. The electrodes used in these experiments are cylindrical glass tips with a coaxial metal wire inside. As a result of the simulations we obtain the current approach curves of electrodes with geometries characterized by several values of the ratio of glass to metal area at the tip. We compare these results with the predictions of known analytic expressions, solutions of the partial differential equations for a flat uniform geometry of the substrate. We present results for other, more complicated substrate surface geometries, e.g. a regular sawtooth-modulated surface, a surface obtained by an Eden model process, ...

  14. The Training of Physics Teachers in Cuba: A Historical Approach

    NASA Astrophysics Data System (ADS)

    de Jesús Alamino Ortega, Diego

    The regular, systematic training of physics teachers in Cuba is quite recent when compared to the long history of physics itself. However, its development may serve to illustrate some interesting solutions to a long-standing question: How should a physics teacher be trained in agreement with a certain society at a given moment? In the Cuban context the answer to this question involves quite an original sequence of continuities and breaks, following perhaps the thoughts of Bolívar's teacher, Simón Rodríguez, who wrote in the nineteenth century: "Beware! The mania of slavishly imitating the enlightened nations may well make America in its infancy play the role of an old lady."

  15. Radial rescaling approach for the eigenvalue problem of a particle in an arbitrarily shaped box.

    PubMed

    Lijnen, Erwin; Chibotaru, Liviu F; Ceulemans, Arnout

    2008-01-01

    In the present work we introduce a methodology for solving a quantum billiard with Dirichlet boundary conditions. The procedure starts from the exactly known solutions for the particle in a circular disk, which are subsequently radially rescaled in such a way that they obey the new boundary conditions. In this way one constructs a complete basis set which can be used to obtain the eigenstates and eigenenergies of the corresponding quantum billiard to a high level of precision. Test calculations for several regular polygons show the efficiency of the method, which often requires only one or two basis functions to describe the lowest eigenstates with high accuracy.

  16. Control of the transition between regular and mach reflection of shock waves

    NASA Astrophysics Data System (ADS)

    Alekseev, A. K.

    2012-06-01

    A control problem was considered that makes it possible to switch the flow between stationary Mach and regular reflection of shock waves within the dual-solution domain. The sensitivity of the flow was computed by solving adjoint equations. A control disturbance was sought by applying gradient optimization methods. According to the computational results, the transition from regular to Mach reflection can be executed by raising the temperature. The transition from Mach to regular reflection can be achieved by lowering the temperature at moderate Mach numbers and is impossible at large Mach numbers. The reliability of the numerical results was confirmed by verifying them with the help of a posteriori analysis.

  17. Stability Properties of the Regular Set for the Navier-Stokes Equation

    NASA Astrophysics Data System (ADS)

    D'Ancona, Piero; Lucà, Renato

    2018-06-01

    We investigate the size of the regular set for small perturbations of some classes of strong large solutions to the Navier-Stokes equation. We consider perturbations of the data that are small in suitable weighted L2 spaces but can be arbitrarily large in any translation invariant Banach space. We give similar results in the small data setting.

  18. Solving the hypersingular boundary integral equation for the Burton and Miller formulation.

    PubMed

    Langrenne, Christophe; Garcia, Alexandre; Bonnet, Marc

    2015-11-01

    This paper presents an easy numerical implementation of the Burton and Miller (BM) formulation, where the hypersingular Helmholtz integral is regularized by identities from the associated Laplace equation and thus needs only the evaluation of weakly singular integrals. The Helmholtz equation and its normal derivative are combined directly, with combinations at edge or corner collocation nodes not used when the surface is not smooth. The hypersingular operators arising in this process are regularized and then evaluated by an indirect procedure based on discretized versions of the Calderón identities linking the integral operators for associated Laplace problems. The method is valid for acoustic radiation and scattering problems involving arbitrarily shaped three-dimensional bodies. Unlike other approaches using direct evaluation of hypersingular integrals, collocation points still coincide with mesh nodes, as is usual when using conforming elements. Using higher-order shape functions (with the boundary element method model size kept fixed) reduces the overall numerical integration effort while increasing the solution accuracy. To reduce the condition number of the resulting BM formulation at low frequencies, a regularized version α = ik/(k² + λ) of the classical BM coupling factor α = i/k is proposed. Comparisons with the Combined Helmholtz Integral Equation Formulation (CHIEF) method of Schenck are made for four example configurations, two of them featuring non-smooth surfaces.
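
    The low-frequency motivation for the regularized coupling factor is easy to verify numerically. The snippet below only compares magnitudes of the classical and regularized factors; the value λ = 1 is an arbitrary assumption, and no boundary element assembly is attempted.

        # Classical BM coupling factor i/k versus regularized ik/(k^2 + lam).
        import numpy as np

        k = np.logspace(-3, 1, 5)          # wavenumbers from low to moderate
        lam = 1.0                          # assumed regularization constant
        alpha_classic = 1j / k             # blows up as k -> 0
        alpha_reg = 1j * k / (k**2 + lam)  # stays bounded as k -> 0
        for kk, a1, a2 in zip(k, alpha_classic, alpha_reg):
            print(f"k={kk:8.3f}  |i/k|={abs(a1):10.2f}  |ik/(k^2+lam)|={abs(a2):.4f}")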

  19. $L^1$ penalization of volumetric dose objectives in optimal control of PDEs

    DOE PAGES

    Barnard, Richard C.; Clason, Christian

    2017-02-11

    This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on $L^1$ penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parameter tends to infinity, and present a semismooth Newton method for their efficient numerical solution. Finally, the performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints.
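
    A toy finite-dimensional analogue may help fix ideas: the $L^1$ penalty of the violation max(Ku - c, 0) is nonsmooth, and a crude subgradient loop is enough for a sketch. The operator K, the reference control, and the level c below are made-up stand-ins for the PDE-constrained setting; the paper's semismooth Newton method is not reproduced.

        # Toy L1 penalization of a violation: minimize over u
        #   0.5*||u - u_ref||^2 + beta * sum(max(K u - c, 0)).
        import numpy as np

        rng = np.random.default_rng(1)
        n, m = 20, 30
        K = rng.standard_normal((m, n)) / np.sqrt(n)  # stand-in state map
        u_ref = rng.standard_normal(n)
        c = 0.5                                       # prescribed level
        beta, step = 10.0, 0.01

        u = u_ref.copy()
        for _ in range(2000):
            viol = K @ u - c
            # subgradient of the penalty: K^T * indicator(viol > 0)
            g = (u - u_ref) + beta * K.T @ (viol > 0).astype(float)
            u -= step * g
        print("components still violating:", int(np.sum(K @ u > c)))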

  20. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. l1 norms on the data and regularization terms in EIT image reconstruction address both the reconstruction of sharp edges and the handling of measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.

  1. An entropy regularization method applied to the identification of wave distribution function for an ELF hiss event

    NASA Astrophysics Data System (ADS)

    Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé

    2006-06-01

    An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is to return the WDF that exhibits the largest entropy and to avoid the use of a priori models, which sometimes seem to be more accurate but without any justification.
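
    A minimal entropy-regularized inversion with a Morozov-style parameter choice might look as follows; the forward operator, data, and noise level are synthetic, and this generic least-squares-plus-entropy objective merely stands in for the actual WDF inversion.

        # min_x ||Ax - b||^2 + alpha * sum(x log x), x > 0, with alpha chosen
        # by bisection so the residual matches the noise level (Morozov).
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        m, n = 40, 25
        A = rng.random((m, n))
        x_true = rng.random(n)
        noise = 0.01 * rng.standard_normal(m)
        b = A @ x_true + noise
        delta = np.linalg.norm(noise)          # noise level, assumed known

        def solve(alpha):
            f = lambda x: np.sum((A @ x - b)**2) + alpha * np.sum(x * np.log(x))
            g = lambda x: 2 * A.T @ (A @ x - b) + alpha * (np.log(x) + 1.0)
            res = minimize(f, np.full(n, 0.5), jac=g, method="L-BFGS-B",
                           bounds=[(1e-10, None)] * n)
            return res.x

        lo, hi = 1e-6, 1e2
        for _ in range(40):                    # bisect alpha on the residual
            alpha = np.sqrt(lo * hi)
            r = np.linalg.norm(A @ solve(alpha) - b)
            lo, hi = (alpha, hi) if r < delta else (lo, alpha)
        print("alpha chosen by discrepancy principle:", alpha)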

  2. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
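
    As a generic illustration of regularized deconvolution (not the paper's uncertainty evaluation, and with a made-up impulse response in place of a hydrophone calibration), a Wiener-type frequency-domain estimate can be sketched as:

        # Regularized deconvolution in the frequency domain.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 256
        t = np.arange(n)
        x_true = np.exp(-0.5 * ((t - 80) / 6.0)**2)   # "measurand"
        h = np.exp(-t / 10.0); h /= h.sum()           # system impulse response
        y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(h)))
        y += 0.001 * rng.standard_normal(n)           # measurement noise

        H = np.fft.fft(h)
        lam = 1e-3                                    # regularization weight
        X_hat = np.fft.fft(y) * np.conj(H) / (np.abs(H)**2 + lam)
        x_hat = np.real(np.fft.ifft(X_hat))
        print("relative error:",
              np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))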

  3. Einstein-Podolsky-Rosen-like separability indicators for two-mode Gaussian states

    NASA Astrophysics Data System (ADS)

    Marian, Paulina; Marian, Tudor A.

    2018-02-01

    We investigate the separability of the two-mode Gaussian states (TMGSs) by using the variances of a pair of Einstein-Podolsky-Rosen (EPR)-like observables. Our starting point is inspired by the general necessary condition of separability introduced by Duan et al (2000 Phys. Rev. Lett. 84 2722). We evaluate the minima of the normalized forms of both the product and sum of such variances, as well as that of a regularized sum. Making use of Simon’s separability criterion, which is based on the condition of positivity of the partial transpose (PPT) of the density matrix (Simon 2000 Phys. Rev. Lett. 84 2726), we prove that these minima are separability indicators in their own right. They appear to quantify the greatest amount of EPR-like correlations that can be created in a TMGS by means of local operations. Furthermore, we reconsider the EPR-like approach to the separability of TMGSs which was developed by Duan et al with no reference to the PPT condition. By optimizing the regularized form of their EPR-like uncertainty sum, we derive a separability indicator for any TMGS. We prove that the corresponding EPR-like condition of separability is manifestly equivalent to Simon’s PPT one. The consistency of these two distinct approaches (EPR-like and PPT) affords a better understanding of the examined separability problem, whose explicit solution found long ago by Simon covers all situations of interest.

  4. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    PubMed

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
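
    The report's graphical example translates directly into a small linear program. The sketch below uses invented benefit, time, and budget numbers purely to show the structure of such a model.

        # Maximize health benefit from treating "regular" and "severe"
        # patients under time and budget constraints (all numbers invented).
        from scipy.optimize import linprog

        c = [-2.0, -5.0]              # negated benefits (linprog minimizes)
        A_ub = [[1.0, 3.0],           # hours per patient, 40 h available
                [100.0, 400.0]]       # cost per patient, 6000 available
        b_ub = [40.0, 6000.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print("regular, severe:", res.x, " max benefit:", -res.fun)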

  5. Dynamics of temporally localized states in passively mode-locked semiconductor lasers

    NASA Astrophysics Data System (ADS)

    Schelte, C.; Javaloyes, J.; Gurevich, S. V.

    2018-05-01

    We study the emergence and the stability of temporally localized structures in the output of a semiconductor laser passively mode locked by a saturable absorber in the long-cavity regime. For large yet realistic values of the linewidth enhancement factor, we disclose the existence of secondary dynamical instabilities where the pulses develop regular and subsequent irregular temporal oscillations. By a detailed bifurcation analysis we show that additional solution branches that consist of multipulse (molecules) solutions exist. We demonstrate that the various solution curves for the single and multipeak pulses can splice and intersect each other via transcritical bifurcations, leading to a complex web of solutions. Our analysis is based on a generic model of mode locking that consists of a time-delayed dynamical system, but also on a much more numerically efficient, yet approximate, partial differential equation. We compare the results of the bifurcation analysis of both models in order to assess up to which point the two approaches are equivalent. We conclude our analysis by the study of the influence of group velocity dispersion, which is only possible in the framework of the partial differential equation model, and we show that it may have a profound impact on the dynamics of the localized states.

  6. Initial-boundary value problem to 2D Boussinesq equations for MHD convection with stratification effects

    NASA Astrophysics Data System (ADS)

    Bian, Dongfen; Liu, Jitao

    2017-12-01

    This paper is concerned with the initial-boundary value problem for the 2D magnetohydrodynamics-Boussinesq system with temperature-dependent viscosity, thermal diffusivity and electrical conductivity. First, we establish the global weak solutions under minimal initial assumptions. Then, by imposing a higher regularity assumption on the initial data, we obtain the global strong solution with uniqueness. Moreover, the exponential decay rates of the weak and strong solutions are obtained, respectively.

  7. A 25% tannic acid solution as a root canal irrigant cleanser: a scanning electron microscope study.

    PubMed

    Bitter, N C

    1989-03-01

    A scanning electron microscope was used to evaluate the cleansing properties of a 25% tannic acid solution on the dentinal surface in the pulp chamber of endodontically prepared teeth. This was compared with the amorphous smear layer left in the canal when hydrogen peroxide and sodium hypochlorite solutions were used as irrigants. The tannic acid solution removed the smear layer more effectively than the regular cleansing agents.

  8. A deformation of Sasakian structure in the presence of torsion and supergravity solutions

    NASA Astrophysics Data System (ADS)

    Houri, Tsuyoshi; Takeuchi, Hiroshi; Yasui, Yukinori

    2013-07-01

    A deformation of Sasakian structure in the presence of totally skew-symmetric torsion is discussed on odd-dimensional manifolds whose metric cones are Kähler with torsion. It is shown that such a geometry inherits properties similar to those of Sasakian geometry. As an example, we present an explicit expression of local metrics. It is also demonstrated that our example of the metrics admits the existence of hidden symmetry described by non-trivial odd-rank generalized closed conformal Killing-Yano tensors. Furthermore, using these metrics as an ansatz, we construct exact solutions in five-dimensional minimal gauged/ungauged supergravity and 11-dimensional supergravity. Finally, the global structures of the solutions are discussed. We obtain regular metrics on compact manifolds in five dimensions, which give natural generalizations of the Sasaki-Einstein manifolds Y^{p,q} and L^{a,b,c}. We also briefly discuss regular metrics on non-compact manifolds in 11 dimensions.

  9. Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Dunca, Argus A.

    2017-12-01

    This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and the zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0,T; H^1). It is shown that under this regularity condition the error u - w^α is O(α) in the norms L^2(0,T; H^1) and L^∞(0,T; L^2), thus improving related known results. It is also shown that the averaged error \overline{u} - \overline{w^α} is of higher order, O(α^{1.5}), in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.

  10. Wormhole solutions with a complex ghost scalar field and their instability

    NASA Astrophysics Data System (ADS)

    Dzhunushaliev, Vladimir; Folomeev, Vladimir; Kleihaus, Burkhard; Kunz, Jutta

    2018-01-01

    We study compact configurations with a nontrivial wormholelike spacetime topology supported by a complex ghost scalar field with a quartic self-interaction. For this case, we obtain regular asymptotically flat equilibrium solutions possessing reflection symmetry. We then show their instability with respect to linear radial perturbations.

  11. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. Then, we use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise sources from the experiment are present.
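
    A sketch of the gradient-type idea in the simplest setting, a least-squares data term with a penalty promoting nonnegativity and sparsity, is given below; for x >= 0 the l1 term is linear, so the proximal step reduces to a shifted projection. The problem data are synthetic, and neither the semismooth Newton variant nor the phase-retrieval operator is reproduced.

        # Proximal gradient for min 0.5*||Ax - b||^2 + lam*sum(x), x >= 0.
        import numpy as np

        rng = np.random.default_rng(4)
        m, n = 60, 100
        A = rng.standard_normal((m, n))
        x_true = np.zeros(n)
        x_true[rng.choice(n, 5, replace=False)] = rng.random(5)
        b = A @ x_true + 0.01 * rng.standard_normal(m)

        lam = 0.1
        t = 1.0 / np.linalg.norm(A, 2)**2      # step size 1/L
        x = np.zeros(n)
        for _ in range(500):
            # gradient step on the data term, then prox: max(. - t*lam, 0)
            x = np.maximum(x - t * (A.T @ (A @ x - b) + lam), 0.0)
        print("nonzeros recovered:", int(np.sum(x > 1e-6)))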

  12. Dirac-Born-Infeld actions and tachyon monopoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calo, Vincenzo; Tallarita, Gianni; Thomas, Steven

    2010-04-15

    We investigate magnetic monopole solutions of the non-Abelian Dirac-Born-Infeld (DBI) action describing two coincident non-BPS D9-branes in flat space. Just as in the case of kink and vortex solitonic tachyon solutions of the full DBI non-BPS actions, as previously analyzed by Sen, these monopole configurations are singular in the first instance and require regularization. We discuss a suitable non-Abelian ansatz that describes a pointlike magnetic monopole and show it solves the equations of motion to leading order in the regularization parameter. Fluctuations are studied and shown to describe a codimension three BPS D6-brane, and a formula is derived for its tension.

  13. On the Solutions of a 2+1-Dimensional Model for Epitaxial Growth with Axial Symmetry

    NASA Astrophysics Data System (ADS)

    Lu, Xin Yang

    2018-04-01

    In this paper, we study the evolution equation derived by Xu and Xiang (SIAM J Appl Math 69(5):1393-1414, 2009) to describe heteroepitaxial growth in 2+1 dimensions with elastic forces on vicinal surfaces, in the radial case with uniform mobility. This equation is strongly nonlinear and contains two elliptic integrals defined via Cauchy principal values. We first derive a formally equivalent parabolic evolution equation (i.e., fully equivalent when sufficient regularity is assumed), and the main aim is to prove existence, uniqueness and regularity of strong solutions. We extensively use techniques from the theory of evolution equations governed by maximal monotone operators in Banach spaces.

  14. Complex optimization for big computational and experimental neutron datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase the solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  15. Some Investigations Relating to the Elastostatics of a Tapered Tube

    DTIC Science & Technology

    1978-03-01

    regularity of the solution on the Z axis. Indeed the assumption of such regularity is stated explicitly by Heins (p. 789) and the problems solved (e.g. a... assumptions, becomes where the integrand is evaluated at (+i, 0). This is a form of the integral representation of the... solution. Now let us look at the assumptions on Q. First of all, in order to be sure that our operations are legi...

  16. Complex optimization for big computational and experimental neutron datasets

    DOE PAGES

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...

    2016-11-07

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase the solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  17. Analysis of borehole expansion and gallery tests in anisotropic rock masses

    USGS Publications Warehouse

    Amadei, B.; Savage, W.Z.

    1991-01-01

    Closed-form solutions are used to show how rock anisotropy affects the variation of the modulus of deformation around the walls of a hole in which expansion tests are conducted. These tests include dilatometer and NX-jack tests in boreholes and gallery tests in tunnels. The effects of rock anisotropy on the modulus of deformation are shown for transversely isotropic and regularly jointed rock masses with planes of transverse isotropy or joint planes parallel or normal to the hole longitudinal axis for plane strain or plane stress condition. The closed-form solutions can also be used when determining the elastic properties of anisotropic rock masses (intact or regularly jointed) in situ. © 1991.

  18. Charge generation layers for solution processed tandem organic light emitting diodes with regular device architecture.

    PubMed

    Höfle, Stefan; Bernhard, Christoph; Bruns, Michael; Kübel, Christian; Scherer, Torsten; Lemmer, Uli; Colsmann, Alexander

    2015-04-22

    Tandem organic light emitting diodes (OLEDs) utilizing fluorescent polymers in both sub-OLEDs and a regular device architecture were fabricated from solution, and their structure and performance were characterized. The charge carrier generation layer comprised a zinc oxide layer, modified by a polyethylenimine interface dipole, for electron injection and either MoO3, WO3, or VOx for hole injection into the adjacent sub-OLEDs. ToF-SIMS investigations and STEM-EDX mapping verified the distinct functional layers throughout the layer stack. At a given device current density, the current efficiencies of both sub-OLEDs add up to a maximum of 25 cd/A, indicating a properly working tandem OLED.

  19. Distributed intelligent control and management (DICAM) applications and support for semi-automated development

    NASA Technical Reports Server (NTRS)

    Hayes-Roth, Frederick; Erman, Lee D.; Terry, Allan; Hayes-Roth, Barbara

    1992-01-01

    We have recently begun a 4-year effort to develop a new technology foundation and associated methodology for the rapid development of high-performance intelligent controllers. Our objective in this work is to enable system developers to create effective real-time systems for control of multiple, coordinated entities in much less time than is currently required. Our technical strategy for achieving this objective is like that in other domain-specific software efforts: analyze the domain and task underlying effective performance, construct parametric or model-based generic components and overall solutions to the task, and provide excellent means for specifying, selecting, tailoring or automatically generating the solution elements particularly appropriate for the problem at hand. In this paper, we first present our specific domain focus, briefly describe the methodology and environment we are developing to provide a more regular approach to software development, and then later describe the issues this raises for the research community and this specific workshop.

  20. Robust GNSS and InSAR tomography of neutrospheric refractivity using a Compressive Sensing approach

    NASA Astrophysics Data System (ADS)

    Heublein, Marion; Alshawaf, Fadwa; Zhu, Xiao Xiang; Hinz, Stefan

    2017-04-01

    Motivation: An accurate knowledge of the 3D distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. In addition, a precise determination of water vapor is also required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). Several approaches for 3D tomographic water vapor reconstruction from GNSS-based Slant Wet Delay (SWD) estimates using least squares (LSQ) adjustment exist. However, the tomographic system is in general ill-conditioned and its solution is unstable. Therefore, additional information or constraints need to be added in order to regularize the system.

    Goal of this work: In this work, we analyze the potential of Compressive Sensing (CS) for robustly reconstructing neutrospheric refractivity from GNSS SWD estimates. Moreover, the benefit of adding InSAR SWD estimates into the tomographic system is studied.

    Approach: A sparse representation of the refractivity field is obtained using a dictionary composed of Discrete Cosine Transforms (DCT) in the longitude and latitude directions and of an Euler transform in the height direction. This sparsity of the signal can be used as a prior for regularization, and the CS inversion is solved by minimizing the number of non-zero entries of the sparse solution in the DCT-Euler domain. No other regularization constraints or prior knowledge is applied. The tomographic reconstruction relies on total SWD estimates from GNSS Precise Point Positioning (PPP) and Persistent Scatterer (PS) InSAR. On the one hand, GNSS PPP SWD estimates are included in the system of equations. On the other hand, 2D ZWD maps are obtained by a combination of point-wise estimates of the wet delay using GNSS observations and partial InSAR wet delay maps. These ZWD estimates are aggregated to derive realistic wet delay input data at given points, as if corresponding to GNSS sites within the study area. The made-up ZWD values can be mapped into different elevation and azimuth angles. Moreover, using the same observation geometry as in the case of the GNSS and InSAR data, a synthetic set of SWD values was generated based on WRF simulations.

    Results: The CS approach shows particular strength in the case of a small number of SWD estimates. When compared to LSQ, the sparse reconstruction is much more robust. In the case of a low density of GNSS sites, adding InSAR SWD estimates improves the reconstruction accuracy for both LSQ and CS. Based on a synthetic SWD dataset generated using WRF simulations of wet refractivity, the CS-based solution of the tomographic system is validated. In the vertical direction, the refractivity distribution deduced from GNSS and InSAR SWD estimates is compared to a tropospheric humidity data set provided by EUMETSAT, consisting of daily mean values of specific humidity given on six pressure levels between 1000 hPa and 200 hPa.

    Study area: The Upper Rhine Graben (URG), characterized by negligible surface deformations, is chosen as the study area. A network of seven permanent GNSS receivers is used for this study, and a total number of 17 SAR images acquired by ENVISAT ASAR is available.
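
    As a stand-in for the CS step described above, the sketch below recovers a signal that is sparse in a DCT dictionary from a few random linear measurements using orthogonal matching pursuit; the observation operator is random rather than a real SWD geometry, and the Euler transform in height is omitted.

        # Sparse recovery over a DCT dictionary via orthogonal matching pursuit.
        import numpy as np
        from scipy.fft import idct

        n, m, k = 128, 40, 5
        rng = np.random.default_rng(5)
        D = idct(np.eye(n), norm="ortho", axis=0)   # DCT synthesis dictionary
        coef = np.zeros(n)
        coef[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x = D @ coef                                # sparse in the DCT domain
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)
        y = Phi @ x                                 # compressed measurements

        A = Phi @ D
        support, r = [], y.copy()
        for _ in range(k):                          # greedy OMP iterations
            support.append(int(np.argmax(np.abs(A.T @ r))))
            sub = A[:, support]
            coeffs, *_ = np.linalg.lstsq(sub, y, rcond=None)
            r = y - sub @ coeffs
        c_full = np.zeros(n)
        c_full[support] = coeffs
        print("relative error:",
              np.linalg.norm(D @ c_full - x) / np.linalg.norm(x))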

  1. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
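
    A minimal version of the iterative firm-shrinkage algorithm mentioned above can be sketched as a proximal-gradient loop on the logistic loss; the thresholds and data below are synthetic assumptions, not the paper's tuned parameters.

        # Firm shrinkage: zero below lam, identity above mu, linear in between.
        import numpy as np

        def firm(x, lam, mu):
            return np.where(np.abs(x) <= lam, 0.0,
                            np.where(np.abs(x) >= mu, x,
                                     np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam)))

        rng = np.random.default_rng(6)
        m, n = 200, 50
        A = rng.standard_normal((m, n))
        w_true = np.zeros(n); w_true[:3] = [2.0, -1.5, 1.0]
        y = (rng.random(m) < 1 / (1 + np.exp(-A @ w_true))).astype(float)

        w = np.zeros(n)
        t = 4.0 / np.linalg.norm(A, 2)**2   # 1/L for the logistic loss gradient
        for _ in range(1000):
            p = 1 / (1 + np.exp(-A @ w))    # predicted probabilities
            w = firm(w - t * A.T @ (p - y), lam=0.02, mu=0.1)
        print("recovered support:", np.nonzero(np.abs(w) > 1e-8)[0])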

  2. Global Regularity and Time Decay for the 2D Magnetohydrodynamic Equations with Fractional Dissipation and Partial Magnetic Diffusion

    NASA Astrophysics Data System (ADS)

    Dong, Bo-Qing; Jia, Yan; Li, Jingna; Wu, Jiahong

    2018-05-01

    This paper focuses on a system of the 2D magnetohydrodynamic (MHD) equations with the kinematic dissipation given by the fractional operator (-Δ)^α and the magnetic diffusion by a partial Laplacian. We are able to show that this system with any α > 0 always possesses a unique global smooth solution when the initial data is sufficiently smooth. In addition, we make a detailed study on the large-time behavior of these smooth solutions and obtain optimal large-time decay rates. Since the magnetic diffusion is only partial here, some classical tools such as the maximal regularity property for the 2D heat operator can no longer be applied. A key observation on the structure of the MHD equations allows us to get around the difficulties due to the lack of full Laplacian magnetic diffusion. The results presented here are the sharpest on the global regularity problem for the 2D MHD equations with only partial magnetic diffusion.

  3. Syndrome Analysis: Chronic Alcoholism in Adults.

    ERIC Educational Resources Information Center

    Pendorf, James E.

    1990-01-01

    Provides outline narrative of most possible outcomes of regular heavy alcohol use, regular alcohol abuse, or chronic alcoholism. A systems analysis approach is used to expose conditions that may result when a human organism is subjected to excessive and chronic alcohol consumption. Such an approach illustrates the detrimental effects which alcohol…

  4. Particle-like solutions of the Einstein-Dirac-Maxwell equations

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel; Yau, Shing-Tung

    1999-08-01

    We consider the coupled Einstein-Dirac-Maxwell equations for a static, spherically symmetric system of two fermions in a singlet spinor state. Soliton-like solutions are constructed numerically. The stability and the properties of the ground state solutions are discussed for different values of the electromagnetic coupling constant. We find solutions even when the electromagnetic coupling is so strong that the total interaction is repulsive in the Newtonian limit. Our solutions are regular and well-behaved; this shows that the combined electromagnetic and gravitational self-interaction of the Dirac particles is finite.

  5. The United States Regular Education Initiative: Flames of Controversy.

    ERIC Educational Resources Information Center

    Lowenthal, Barbara

    1990-01-01

    Arguments in favor of and against the Regular Education Initiative (REI) are presented. Lack of appropriate qualifications of regular classroom teachers and a lack of empirical evidence on REI effectiveness are cited as some of the problems with the approach. (JDD)

  6. A constrained regularization method for inverting data represented by linear algebraic or integral equations

    NASA Astrophysics Data System (ADS)

    Provencher, Stephen W.

    1982-09-01

    CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
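
    The core numerical pattern behind this kind of constrained regularized inversion, Tikhonov smoothing plus a nonnegativity constraint, can be sketched by stacking the regularizor into the design matrix and calling a nonnegative least-squares solver. The Laplace-type kernel and data below are synthetic; none of CONTIN's Fortran machinery is reproduced.

        # Nonnegative Tikhonov inversion of a Laplace-type kernel via NNLS.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(7)
        t = np.linspace(0.01, 1.0, 60)             # measurement grid
        s = np.linspace(0.1, 20.0, 40)             # grid of decay rates
        A = np.exp(-np.outer(t, s))                # Laplace-type kernel
        x_true = np.exp(-0.5 * ((s - 5.0) / 1.0)**2)
        b = A @ x_true + 0.001 * rng.standard_normal(len(t))

        lam = 1e-2
        L = np.diff(np.eye(len(s)), 2, axis=0)     # second-difference regularizor
        A_aug = np.vstack([A, np.sqrt(lam) * L])
        b_aug = np.concatenate([b, np.zeros(L.shape[0])])
        x, _ = nnls(A_aug, b_aug)
        print("recovered peak at s =", s[np.argmax(x)])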

  7. Regularized solution of a nonlinear problem in electromagnetic sounding

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Fenu, Caterina; Rodriguez, Giuseppe

    2014-12-01

    Non-destructive investigation of soil properties is crucial when trying to identify inhomogeneities in the ground or the presence of conductive substances. This kind of survey can be addressed with the aid of electromagnetic induction measurements taken with a ground conductivity meter. In this paper, starting from electromagnetic data collected by this device, we reconstruct the electrical conductivity of the soil with respect to depth, with the aid of a regularized damped Gauss-Newton method. We propose an inversion method based on the low-rank approximation of the Jacobian of the function to be inverted, for which we develop exact analytical formulae. The algorithm chooses a relaxation parameter in order to ensure the positivity of the solution and implements various methods for the automatic estimation of the regularization parameter. This leads to a fast and reliable algorithm, which is tested in numerical experiments on both synthetic data sets and field data. The results show that the algorithm produces reasonable solutions in the case of synthetic data sets, even in the presence of a noise level consistent with real applications, and yields results that are compatible with those obtained by electrical resistivity tomography in the case of field data. Research supported in part by Regione Sardegna grant CRP2_686.
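
    A generic regularized damped Gauss-Newton loop of the kind described, with a step-halving relaxation that keeps the iterate positive, is sketched below on a toy forward model; the EM-sounding forward map and the paper's exact analytical Jacobian formulae are not reproduced.

        # Regularized damped Gauss-Newton on a toy nonlinear model F(x) = data.
        import numpy as np

        def F(x):
            return np.array([x[0]**2 + x[1], np.exp(-x[0]) * x[1], x[0] * x[1]])

        def J(x):                       # Jacobian of F
            return np.array([[2 * x[0], 1.0],
                             [-np.exp(-x[0]) * x[1], np.exp(-x[0])],
                             [x[1], x[0]]])

        rng = np.random.default_rng(8)
        data = F(np.array([1.5, 2.0])) + 0.001 * rng.standard_normal(3)
        x = np.array([1.0, 1.0])
        lam = 1e-2                      # Tikhonov damping
        for _ in range(30):
            r = data - F(x)
            Jx = J(x)
            dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(2), Jx.T @ r)
            step = 1.0
            while np.any(x + step * dx <= 0):   # relaxation: stay positive
                step *= 0.5
            x = x + step * dx
        print("estimated parameters:", x)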

  8. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on a Bayesian approach to the regularization strategy.
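
    In the Gaussian-likelihood, Gaussian-smoothness-prior setting, the estimate at the heart of statistical regularization has a closed form. The sketch below computes that posterior mean with a made-up apparatus function and a fixed prior weight, whereas Turchin's method treats the weight probabilistically.

        # Posterior mean for b = Kx + noise with a smoothness prior on x.
        import numpy as np

        rng = np.random.default_rng(9)
        n = 50
        kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0)**2)
        kernel /= kernel.sum()                       # apparatus function
        K = np.array([np.convolve(row, kernel, mode="same")
                      for row in np.eye(n)]).T       # convolution operator
        x_true = np.sin(np.linspace(0, 3 * np.pi, n))**2
        sigma = 0.01
        b = K @ x_true + sigma * rng.standard_normal(n)

        Omega = np.diff(np.eye(n), 2, axis=0)        # smoothness prior operator
        alpha = 1e2                                  # fixed prior weight
        x_post = np.linalg.solve(K.T @ K / sigma**2 + alpha * Omega.T @ Omega,
                                 K.T @ b / sigma**2)
        print("rms error:", np.sqrt(np.mean((x_post - x_true)**2)))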

  9. Review, evaluation, and discussion of the challenges of missing value imputation for mass spectrometry-based label-free global proteomics

    DOE PAGES

    Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.; ...

    2015-04-09

    In this review, we apply selected imputation strategies to label-free liquid chromatography–mass spectrometry (LC–MS) proteomics datasets to evaluate the accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches for individual merits and discuss the caveats of each approach with respect to the example LC–MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performances with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases, performing classification without imputation yielded the most accurate classification. Thus, because of the complex mechanisms of missing data in proteomics, which also vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.

  10. Asymptotic traveling wave solution for a credit rating migration problem

    NASA Astrophysics Data System (ADS)

    Liang, Jin; Wu, Yuan; Hu, Bei

    2016-07-01

    In this paper, an asymptotic traveling wave solution of a free boundary model for pricing a corporate bond with credit rating migration risk is studied. This is the first study to associate an asymptotic traveling wave solution with the credit rating migration problem. The pricing problem with credit rating migration risk is modeled by a free boundary problem. The existence, uniqueness and regularity of the solution are obtained. Under certain conditions, we prove that the solution of our credit rating problem converges to a traveling wave solution, which has an explicit form. Furthermore, numerical examples are presented.

  11. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: reconstructed conductivity changes along selected left and right vertical lines, plotted for the ground truth (GT) and for the TV, TGV, and GREIT reconstructions.
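
    A one-dimensional denoising comparison conveys why TGV avoids the staircase effect. The sketch below, which uses the CVXPY modeling library as a convenience and a generic smooth ramp rather than EIT data, solves the TV and second-order TGV problems side by side.

        # 1-D TV vs. TGV-2 denoising of a smooth ramp (TV will "staircase").
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(10)
        n = 100
        x_true = np.linspace(0, 1, n)**2
        y = x_true + 0.05 * rng.standard_normal(n)

        D = np.diff(np.eye(n), 1, axis=0)        # first-difference operator

        x_tv = cp.Variable(n)                    # TV denoising
        cp.Problem(cp.Minimize(0.5 * cp.sum_squares(x_tv - y)
                               + 0.5 * cp.norm1(D @ x_tv))).solve()

        x_tgv = cp.Variable(n)                   # TGV-2: auxiliary field w
        w = cp.Variable(n - 1)                   # absorbs the gradient
        Dw = np.diff(np.eye(n - 1), 1, axis=0)
        cp.Problem(cp.Minimize(0.5 * cp.sum_squares(x_tgv - y)
                               + 0.5 * cp.norm1(D @ x_tgv - w)
                               + 1.0 * cp.norm1(Dw @ w))).solve()

        print("TV error: ", np.linalg.norm(x_tv.value - x_true))
        print("TGV error:", np.linalg.norm(x_tgv.value - x_true))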

  12. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.

  13. Convection Regularization of High Wavenumbers in Turbulence ANS Shocks

    DTIC Science & Technology

    2011-07-31

    dynamics of particles that adhere to one another upon collision and has been studied as a simple cosmological model for describing the nonlinear formation of... solution we mean a solution to the Cauchy problem in the following sense. Definition 5.1. A function u : R × [0, T] → R^N is a weak solution of the... step 2 the limit function in the α → 0 limit is shown to satisfy the definition of a weak solution for the Cauchy problem. Without loss of generality

  14. Explicit error bounds for the α-quasi-periodic Helmholtz problem.

    PubMed

    Lord, Natacha H; Mulholland, Anthony J

    2013-10-01

    This paper considers a finite element approach to modeling electromagnetic waves in a periodic diffraction grating. In particular, an a priori error estimate associated with the α-quasi-periodic transformation is derived. This involves the solution of the associated Helmholtz problem being written as a product of e^{iαx} and an unknown function called the α-quasi-periodic solution. To begin with, the well-posedness of the continuous problem is examined using a variational formulation. The problem is then discretized, and a rigorous a priori error estimate, which guarantees the uniqueness of this approximate solution, is derived. In previous studies, the continuity of the Dirichlet-to-Neumann map has simply been assumed and the dependency of the regularity constant on the system parameters, such as the wavenumber, has not been shown. To address this deficiency, in this paper an explicit dependence on the wavenumber and the degree of the polynomial basis in the a priori error estimate is obtained. Since the finite element method is well known for dealing with any geometries, comparison of numerical results obtained using the α-quasi-periodic transformation with a lattice sum technique is then presented.

  15. Inter and intra-modal deformable registration: continuous deformations meet efficient optimal linear programming.

    PubMed

    Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced to the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is considered to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the potential of our approach.

  16. Relativistic Bessel cylinders

    NASA Astrophysics Data System (ADS)

    Krisch, J. P.; Glass, E. N.

    2014-10-01

    A set of cylindrical solutions to Einstein's field equations for power law densities is described. The solutions have a Bessel function contribution to the metric. For matter cylinders regular on axis, the first two solutions are the constant density Gott-Hiscock string and a cylinder with a metric Airy function. All members of this family have the Vilenkin limit to their mass per length. Some examples of Bessel shells and Bessel motion are given.

  17. Nonlinear refraction and reflection travel time tomography

    USGS Publications Warehouse

    Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.

    1998-01-01

    We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first-arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge, but a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.

  18. Applications of compressed sensing image reconstruction to sparse view phase tomography

    NASA Astrophysics Data System (ADS)

    Ueda, Ryosuke; Kudo, Hiroyuki; Dong, Jian

    2017-10-01

    X-ray phase CT has the potential to give higher contrast in soft tissue observations. To shorten the measurement time, sparse-view CT data acquisition has been attracting attention. This paper applies two major compressed sensing (CS) approaches to image reconstruction in x-ray sparse-view phase tomography. The first CS approach is the standard Total Variation (TV) regularization. The major drawbacks of TV regularization are a patchy artifact and loss of smooth intensity changes due to the piecewise constant nature of the image model. The second CS method is a relatively new approach which uses a nonlinear smoothing filter to design the regularization term. The nonlinear filter based CS is expected to reduce the major artifacts of TV regularization. However, past research has not clearly demonstrated how much image quality difference occurs between the TV regularization and the nonlinear filter based CS in x-ray phase CT applications. We clarify the issue by applying the two CS approaches to x-ray phase tomography. We provide results with numerically simulated data, which demonstrate that the nonlinear filter based CS outperforms the TV regularization in terms of textures and smooth intensity changes.

  19. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

    The contact concentration measurement data assimilation problem is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimum of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure of the resulting analysis and reduces the need to calculate the model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of the adjoint problem solution. This issue is solved within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method, as sketched below. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate used is an upper bound acts as the assimilation parameter. The solution obtained can be used as an initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage that is responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme with respect to time. The scheme is based on analytical extraction of the exponential terms from the solution. This provides an unconditionally positive sign for the evaluated concentrations. The splitting-based structure of the algorithm provides means for efficient parallel realization. The work is partially supported by Programs No 4 of the Presidium RAS and No 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004.
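
    The matrix sweep (Thomas algorithm) mentioned as the building block of the one-dimensional splitting stages is short enough to sketch in full; the tridiagonal coefficients below are arbitrary diagonally dominant stand-ins, not a discretized convection-diffusion operator.

        # Thomas algorithm: solve a tridiagonal system in O(n).
        import numpy as np

        def thomas(a, b, c, d):
            """a = sub-, b = main-, c = super-diagonal, d = right-hand side."""
            n = len(b)
            cp_, dp_ = np.zeros(n), np.zeros(n)
            cp_[0], dp_[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):               # forward sweep
                denom = b[i] - a[i] * cp_[i - 1]
                cp_[i] = c[i] / denom if i < n - 1 else 0.0
                dp_[i] = (d[i] - a[i] * dp_[i - 1]) / denom
            x = np.zeros(n)
            x[-1] = dp_[-1]
            for i in range(n - 2, -1, -1):      # back substitution
                x[i] = dp_[i] - cp_[i] * x[i + 1]
            return x

        n = 8
        a = np.full(n, -1.0); a[0] = 0.0        # a[0] unused
        c = np.full(n, -1.0); c[-1] = 0.0       # c[-1] unused
        b = np.full(n, 4.0)                     # diagonally dominant
        d = np.ones(n)
        x = thomas(a, b, c, d)
        T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print("residual:", np.linalg.norm(T @ x - d))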

  20. A regularization approach to hydrofacies delineation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohlberg, Brendt; Tartakovsky, Daniel

    2009-01-01

    We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.

  1. Euclid, Fibonacci, Sketchpad.

    ERIC Educational Resources Information Center

    Litchfield, Daniel C.; Goldenheim, David A.

    1997-01-01

    Describes the solution to a geometric problem by two ninth-grade mathematicians using The Geometer's Sketchpad computer software program. The problem was to divide any line segment into a regular partition of any number of parts, a variation on a problem by Euclid. The solution yielded two constructions, one a GLaD construction and the other using…

  2. Persistent Problems and Promising Solutions in Inservice Education. Report of Selected REGI Project Directors.

    ERIC Educational Resources Information Center

    Grigsby, Greg

    This report summarizes and presents information from interviews with 22 National Inservice Network project directors. The purpose was to identify problems and solutions encountered in directing regular education inservice (REGI) projects. The projects were sponsored by institutions of higher education, state and local education agencies, and an…

  3. Regularities in the association of polymethacrylic acid with benzethonium chloride in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Tugay, A. V.; Zakordonskiy, V. P.

    2006-06-01

    The association of cationogenic benzethonium chloride with polymethacrylic acid in aqueous solutions was studied by nephelometry, conductometry, tensiometry, viscometry, and pH-metry. The critical concentrations of aggregation and polymer saturation with the surface-active substance were determined. A model describing processes in such systems step by step was suggested.

  4. WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method

    NASA Astrophysics Data System (ADS)

    Crevoisier, David; Voltz, Marc

    2013-04-01

    To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and data-assimilation techniques are now used in many application fields to improve simulation accuracy. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run on large temporal and spatial domains, which requires a large number of model runs. Hence, despite the regular increase in computing capacities, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while ensuring satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. the standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable for standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, etc. The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages for WATSFAR: i) robustness: even on fine-textured soil or with high water and solute fluxes - where Hydrus simulations may fail to converge - no numerical problem appears; and ii) accuracy of simulations even for coarse spatial discretisations, which Hydrus can only achieve with fine discretisations.
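
    For reference, the standard van Genuchten retention curve generalised by WATSFAR can be written in a few lines. This is the textbook form, not WATSFAR code, and the example parameters in the comment are purely illustrative.

        import numpy as np

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
            # Water content theta(h) for pressure head h (h < 0 unsaturated),
            # with the usual Mualem constraint m = 1 - 1/n.
            m = 1.0 - 1.0 / n
            se = np.where(h < 0, (1.0 + np.abs(alpha * h) ** n) ** (-m), 1.0)
            return theta_r + (theta_s - theta_r) * se

        # Example with illustrative loam-like parameters:
        # van_genuchten_theta(h=-1.0, theta_r=0.078, theta_s=0.43, alpha=3.6, n=1.56)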

  5. The environmental virtual observatory pilot (EVOp): a cloud solution demonstrating effective science for efficient decisions

    NASA Astrophysics Data System (ADS)

    Gurney, R. J.; Emmett, B.; McDonald, A.

    2012-12-01

    Environmental managers and policy makers face a challenging future trying to accommodate growing expectations of environmental well-being, while subject to maturing regulation, constrained budgets and a public scrutiny that expects easier and more meaningful access to data and decision logic. Supporting such a challenge requires new tools and new approaches. The EVOp is an initiative from the UK Natural Environment Research Council (NERC) designed to deliver proof of concept for these new tools and approaches. A series of exemplar 'big catchment science questions' are posed and the prospects for their solution are assessed. These are then used to develop cloud solutions for serving data, models, visualisation and analysis tools to scientists, regulators, private companies and the public, all of whom have different expectations of what environmental information is important. Approaches are tested regularly with users using Scrum. The VO vision encompasses seven key ambitions: i. being driven by the need to contribute to the solution of major environmental issues that impinge on, or link to, catchment science; ii. having the flexibility and adaptability to address future problems not yet defined or fully clarified; iii. being able to communicate issues and solutions to a range of audiences; iv. supporting easy access by a variety of users; v. drawing meaningful information from data and models and identifying the constraints on application in terms of errors, uncertainties, etc.; vi. adding value and cost effectiveness to current investigations by supporting transfer and scale adjustment, thus limiting the repetition of expensive field monitoring addressing essentially the same issues in varying locations; vii. promoting effective interfacing of robust science with a variety of end users by using terminology or measures familiar to the user (or required by regulation), including financial and carbon accounting, whole-life or fixed-period costing, and risk as probability or as disability-adjusted life years, as appropriate. Architectures pivotal to communicating these ambitions are presented. Cloud computing facilitates the required interoperability across data sets, models, visualisations, etc. There are also additional legal, security, cultural and standards barriers that need to be overcome before such a cloud becomes operational.

  6. The Automated Root Exudate System (ARES): a method to apply solutes at regular intervals to soils in the field.

    PubMed

    Lopez-Sangil, Luis; George, Charles; Medina-Barcenas, Eduardo; Birkett, Ali J; Baxendale, Catherine; Bréchet, Laëtitia M; Estradera-Gumbau, Eduard; Sayer, Emma J

    2017-09-01

    Root exudation is a key component of nutrient and carbon dynamics in terrestrial ecosystems. Exudation rates vary widely by plant species and environmental conditions, but our understanding of how root exudates affect soil functioning is incomplete, in part because there are few viable methods to manipulate root exudates in situ. To address this, we devised the Automated Root Exudate System (ARES), which simulates increased root exudation by applying small amounts of labile solutes at regular intervals in the field. The ARES is a gravity-fed drip irrigation system comprising a reservoir bottle connected via a timer to a micro-hose irrigation grid covering c. 1 m²; 24 drip-tips are inserted into the soil to 4 cm depth to apply solutions into the rooting zone. We installed two ARES subplots within existing litter removal and control plots in a temperate deciduous woodland. We applied either an artificial root exudate solution (RE) or a procedural control solution (CP) to each subplot for 1 min day⁻¹ during two growing seasons. To investigate the influence of root exudation on soil carbon dynamics, we measured soil respiration monthly and soil microbial biomass at the end of each growing season. The ARES applied the solutions at a rate of c. 2 L m⁻² week⁻¹ without significantly increasing soil water content. The application of RE solution had a clear effect on soil carbon dynamics, but the response varied by litter treatment. Across two growing seasons, soil respiration was 25% higher in RE compared to CP subplots in the litter removal treatment, but not in the control plots. By contrast, we observed a significant increase in microbial biomass carbon (33%) and nitrogen (26%) in RE subplots in the control litter treatment. The ARES is an effective, low-cost method to apply experimental solutions directly into the rooting zone in the field. The installation of the systems entails minimal disturbance to the soil and little maintenance is required. Although we used ARES to apply root exudate solution, the method can be used to apply many other treatments involving solute inputs at regular intervals in a wide range of ecosystems.

  7. Orientation domains: A mobile grid clustering algorithm with spherical corrections

    NASA Astrophysics Data System (ADS)

    Mencos, Joana; Gratacós, Oscar; Farré, Mercè; Escalante, Joan; Arbués, Pau; Muñoz, Josep Anton

    2012-12-01

    An algorithm has been designed and tested which was devised as a tool assisting the analysis of geological structures solely from orientation data. More specifically, the algorithm is intended for the analysis of geological structures that can be approached as planar and piecewise features, like many folded strata. Input orientation data are expressed as pairs of angles (azimuth and dip). The algorithm starts by considering the data in Cartesian coordinates. This is followed by a search for an initial clustering solution, which is achieved by comparing the results output from the systematic shift of a regular rigid grid over the data. This initial solution is optimal (achieves minimum square error) once the grid size and the shift increment are fixed. Finally, the algorithm corrects for the variable spread that is generally expected from this data type using a reshaped non-rigid grid. The algorithm is size-oriented, which implies the application of conditions on cluster size throughout the process, in contrast to density-oriented algorithms, which are also widely used when dealing with spatial data. Results are derived in a few seconds and, when tested on synthetic examples, were found to be consistent and reliable. This makes the algorithm a valuable alternative to the time-consuming traditional approaches available to geologists.
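
    The first step of the algorithm, taking (azimuth, dip) pairs to Cartesian coordinates, looks roughly like the sketch below. The dip-direction/upward-pole convention chosen here is one common choice, not necessarily the paper's.

        import numpy as np

        def orientation_to_cartesian(azimuth_deg, dip_deg):
            # Unit vector of the (upward) pole to a plane, with azimuth read
            # as dip direction from north, in an East-North-Up frame.
            az, dip = np.radians(azimuth_deg), np.radians(dip_deg)
            e = np.sin(az) * np.sin(dip)
            n = np.cos(az) * np.sin(dip)
            u = np.cos(dip)
            return np.stack([e, n, u], axis=-1)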

  8. Preconditioned MoM Solutions for Complex Planar Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Jackson, D; Champagne, N

    2004-01-23

    The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach which takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered-material Green's functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported elsewhere; this contribution is limited to presenting new results.
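
    The FFT acceleration at the heart of AIM-type methods can be pictured schematically: once the basis functions are projected onto a regular grid, the grid-to-grid interaction is a discrete convolution, so the matrix-vector product becomes an elementwise multiplication in the Fourier domain. The sketch below is generic (a scalar 2D kernel, hypothetical names), not the FAIM implementation.

        import numpy as np

        def grid_matvec(green_padded, q_grid):
            # FFT-accelerated product of a translation-invariant interaction
            # matrix with grid sources q, in O(N log N) instead of O(N^2);
            # green_padded holds zero-padded grid Green's function samples.
            y = np.fft.ifft2(np.fft.fft2(green_padded) *
                             np.fft.fft2(q_grid, s=green_padded.shape))
            return np.real(y)[:q_grid.shape[0], :q_grid.shape[1]]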

  9. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach to registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is likewise coupled with the regularization term (the length functional) via multiplication. In fact, our proposed geometric model is the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions: the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework involves important contributions. First, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e. the local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.

  10. Regularization of instabilities in gravity theories

    NASA Astrophysics Data System (ADS)

    Ramazanoǧlu, Fethi M.

    2018-01-01

    We investigate instabilities and their regularization in theories of gravitation. Instabilities can be beneficial since their growth often leads to prominent observable signatures, which makes them especially relevant to relatively low signal-to-noise ratio measurements such as gravitational wave detections. An indefinitely growing instability usually renders a theory unphysical; hence, a desirable instability should also come with underlying physical machinery that stops the growth at finite values, i.e., regularization mechanisms. The prototypical gravity theory that presents such an instability is the spontaneous scalarization phenomenon of scalar-tensor theories, which features a tachyonic instability. We identify the regularization mechanisms in this theory and show that they can be utilized to regularize other instabilities as well. Namely, we present theories in which spontaneous growth is triggered by a ghost rather than a tachyon and numerically calculate stationary solutions of scalarized neutron stars in these theories. We speculate on the possibility of regularizing known divergent instabilities in certain gravity theories using our findings and discuss alternative theories of gravitation in which regularized instabilities may be present. Even though we study many specific examples, our main point is the recognition of regularized instabilities as a common theme and unifying mechanism in a vast array of gravity theories.

  11. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
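
    For a small dense problem, the Tikhonov/L-curve machinery described here fits in a few lines. The sketch uses a plain SVD rather than the generalized SVD of the paper, and is an illustration, not the SXRIS code.

        import numpy as np

        def tikhonov_svd(A, y, lam):
            # Tikhonov solution via SVD filter factors f_i = s_i^2 / (s_i^2 + lam^2).
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            f = s**2 / (s**2 + lam**2)
            return Vt.T @ (f * (U.T @ y) / s)

        def l_curve(A, y, lams):
            # Residual norm vs. solution norm for each lam; the corner of the
            # log-log curve marks the optimum regularization parameter.
            return [(np.linalg.norm(A @ x - y), np.linalg.norm(x))
                    for x in (tikhonov_svd(A, y, lam) for lam in lams)]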

  12. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
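
    A minimal non-blind cousin of the inverse-filtering idea is useful for intuition: with a known kernel, a quadratically regularized inverse filter has a closed form in the Fourier domain. The sketch shows only that building block, not the paper's convex blind model with star-norm and total-variation terms.

        import numpy as np

        def regularized_inverse_filter(g, kernel, lam=1e-2):
            # Restore f from g = kernel * f + noise via
            # F = conj(K) G / (|K|^2 + lam), a Tikhonov-regularized
            # (Wiener-type) deconvolution with a flat prior.
            K = np.fft.fft2(kernel, s=g.shape)
            G = np.fft.fft2(g)
            return np.real(np.fft.ifft2(np.conj(K) * G / (np.abs(K)**2 + lam)))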

  13. Racial Differences in Trajectories of Heavy Drinking and Regular Marijuana Use from Ages 13 through 24 Among African-American and White Males

    PubMed Central

    Finlay, Andrea K.; White, Helene R.; Mun, Eun-Young; Cronley, Courtney C.; Lee, Chioun

    2011-01-01

    Background: Although there are significant differences in the prevalence of substance use between African-American and White adolescents, few studies have examined racial differences in developmental patterns of substance use, especially during the important developmental transition from adolescence to young adulthood. This study examines racial differences in trajectories of heavy drinking and regular marijuana use from adolescence into young adulthood. Methods: A community-based sample of non-Hispanic African-American (n = 276) and non-Hispanic White (n = 211) males was analyzed to identify trajectories from ages 13 through 24. Results: Initial analyses indicated race differences in heavy drinking and regular marijuana use trajectories. African Americans were more likely than Whites to be members of the nonheavy drinkers/nondrinkers group and less likely to be members of the early-onset heavy drinkers group. The former were also more likely than the latter to be members of the late-onset regular marijuana use group. Separate analyses by race indicated differences in heavy drinking for African Americans and Whites: a 2-group model for heavy drinking fit best for African Americans, whereas a 4-group solution fit best for Whites. For regular marijuana use, a similar 4-group solution fit for both races, although group proportions differed. Conclusions: Within-race analyses indicated that there were clear race differences in the long-term patterns of alcohol use; regular marijuana use patterns were more similar. Extended follow-ups are needed to examine differences and similarities in maturation processes for African-American and White males. For both races, prevention and intervention efforts are necessary into young adulthood. PMID:21908109

  14. Regularity of random attractors for fractional stochastic reaction-diffusion equations on Rn

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han

    2018-06-01

    We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in Hs (Rn) with s ∈ (0 , 1). We prove the existence and uniqueness of the tempered random attractor that is compact in Hs (Rn) and attracts all tempered random subsets of L2 (Rn) with respect to the norm of Hs (Rn). The main difficulty is to show the pullback asymptotic compactness of solutions in Hs (Rn) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.

  15. Gravitational field of static p -branes in linearized ghost-free gravity

    NASA Astrophysics Data System (ADS)

    Boos, Jens; Frolov, Valeri P.; Zelnikov, Andrei

    2018-04-01

    We study the gravitational field of static p -branes in D -dimensional Minkowski space in the framework of linearized ghost-free (GF) gravity. The concrete models of GF gravity we consider are parametrized by the nonlocal form factors exp (-□/μ2) and exp (□2/μ4) , where μ-1 is the scale of nonlocality. We show that the singular behavior of the gravitational field of p -branes in general relativity is cured by short-range modifications introduced by the nonlocalities, and we derive exact expressions of the regularized gravitational fields, whose geometry can be written as a warped metric. For large distances compared to the scale of nonlocality, μ r →∞ , our solutions approach those found in linearized general relativity.

  16. Pareto joint inversion of 2D magnetotelluric and gravity data

    NASA Astrophysics Data System (ADS)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

    In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used; additionally, seismic data were used to set constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description based on a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO), adapted to handle two or more target functions at once, and an additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a stochastic global optimization method, many proposals must be evaluated to find a single Pareto solution and then compose a Pareto front; to speed up this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed treatment of joint inversion problems has several advantages. First of all, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution; this choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of work, the approach was tested using MT and gravity data because this combination is often used in practice; however, the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where the interesting density distributions are relatively shallow and the resistivity changes are related to deeper parts - conditions well suited for joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests on real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13.
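
    In the Pareto scheme, each PSO proposal is kept or discarded by a dominance test over the pair of misfits (here, MT and gravity). A generic non-dominated filter, illustrative rather than the project's code, looks like this:

        import numpy as np

        def pareto_front(costs):
            # costs has shape (n_solutions, n_objectives); every objective is
            # minimized. Returns a mask of non-dominated (Pareto) solutions.
            n = costs.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                dominates_i = (np.all(costs <= costs[i], axis=1) &
                               np.any(costs < costs[i], axis=1))
                if dominates_i.any():
                    keep[i] = False
            return keep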

  17. Boundedness and almost Periodicity in Time of Solutions of Evolutionary Variational Inequalities

    NASA Astrophysics Data System (ADS)

    Pankov, A. A.

    1983-04-01

    In this paper existence theorems are obtained for the solutions of abstract parabolic variational inequalities, which are bounded with respect to time (in the Stepanov and L^\\infty norms). The regularity and almost periodicity properties of such solutions are studied. Theorems are also established concerning their solvability in spaces of Besicovitch almost periodic functions. The majority of the results are obtained without any compactness assumptions. Bibliography: 30 titles.

  18. Three-gradient regular solution model for simple liquids wetting complex surface topologies

    PubMed Central

    Akerboom, Sabine; Kamperman, Marleen

    2016-01-01

    Summary: We use regular solution theory and implement a three-gradient model for a liquid/vapour system in contact with a complex surface topology to study the shape of a liquid drop in advancing and receding wetting scenarios. More specifically, we study droplets on an inverse opal: spherical cavities in a hexagonal pattern. In line with experimental data, we find that the surface may switch from hydrophilic (contact angle on a smooth surface θY < 90°) to hydrophobic (effective advancing contact angle θ > 90°). Both the Wenzel wetting state, that is, cavities under the liquid are filled, and the Cassie–Baxter wetting state, that is, air entrapment in the cavities under the liquid, were observed using our approach, without a discontinuity in the water front shape or in the advancing contact angle θ. Therefore, air entrapment cannot be the main reason why the contact angle θ for an advancing water front varies. Rather, the contact line is pinned and curved by the surface structures, inducing curvature perpendicular to the plane in which the contact angle θ is observed, and the contact line does not move in a continuous way, but via depinning transitions. The pinning is not limited to kinks in the surface with angles θkink smaller than the angle θY: even for θkink > θY, contact line pinning is found. Therefore, the full 3D structure of the inverse opal, rather than a simple parameter such as the wetting state or θkink, determines the final observed contact angle. PMID:27826512
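
    The regular solution theory underlying this lattice model combines ideal mixing entropy with a single mean-field interaction parameter. As a reminder of that basic ingredient (textbook form, not the paper's three-gradient implementation):

        import numpy as np

        def regular_solution_free_energy(phi, chi):
            # Dimensionless mixing free energy per site, f/kT, for local
            # density phi in (0, 1) and interaction parameter chi.
            return (phi * np.log(phi) + (1 - phi) * np.log(1 - phi)
                    + chi * phi * (1 - phi))

        # For chi > 2 the curve develops two minima, i.e. the liquid/vapour
        # coexistence that the three-gradient model resolves in space.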

  19. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
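
    The second-order dynamical system described above can be sketched generically: evolve x'' + eta*x' = -grad J(x) and let the damping dissipate energy until the iterate settles at a minimizer. The code below is a semi-implicit illustration under that assumption, not the paper's damped symplectic scheme or its dynamical parameter selection.

        import numpy as np

        def damped_flow(grad_J, x0, eta=1.0, dt=0.01, steps=5000):
            # Damped second-order gradient flow x'' + eta*x' = -grad_J(x),
            # integrated with a simple semi-implicit step.
            x, v = x0.copy(), np.zeros_like(x0)
            for _ in range(steps):
                v = (v - dt * grad_J(x)) / (1.0 + dt * eta)
                x = x + dt * v
            return x

        # Example: minimize J(x) = 0.5 * ||A x - b||^2 with
        # damped_flow(lambda x: A.T @ (A @ x - b), np.zeros(A.shape[1]))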

  20. Analytic solutions for Long's equation and its generalization

    NASA Astrophysics Data System (ADS)

    Humi, Mayer

    2017-12-01

    Two-dimensional, steady-state, stratified, isothermal atmospheric flow over topography is governed by Long's equation. Numerical solutions of this equation were derived and used by several authors. In particular, these solutions were applied extensively to analyze the experimental observations of gravity waves. In the first part of this paper we derive an extension of this equation to non-isothermal flows. Then we devise a transformation that simplifies this equation. We show that this simplified equation admits solitonic-type solutions in addition to regular gravity waves. These new analytical solutions provide new insights into the propagation and amplitude of gravity waves over topography.

  1. A general framework for regularized, similarity-based image restoration.

    PubMed

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desired spectral properties for the normalized Laplacian, such as being symmetric and positive semidefinite and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations, where in each outer iteration the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. The specific form of the cost function also allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
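
    The balancing step can be pictured with a small sketch: a symmetric Sinkhorn-style scaling makes the kernel matrix (assumed symmetric with positive entries) approximately doubly stochastic, after which I - W_balanced is symmetric, positive semidefinite, and annihilates constant images. This is a generic illustration of the construction, not the authors' code.

        import numpy as np

        def normalized_graph_laplacian(W, iters=200):
            # Find a diagonal scaling d so that the row sums of (d_i W_ij d_j)
            # equal 1, then return L = I - D W D.
            d = np.ones(W.shape[0])
            for _ in range(iters):
                d = np.sqrt(d / (W @ d))   # damped fixed point of d = 1 / (W d)
            Wb = W * np.outer(d, d)
            return np.eye(W.shape[0]) - Wb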

  2. A framework with the Cuckoo algorithm for discovering regular plans in mobile clients

    NASA Astrophysics Data System (ADS)

    Tsiligaridis, John

    2017-09-01

    In a mobile computing system, broadcasting has become a very interesting and challenging research issue. The server continuously broadcasts data to mobile users; the data can be inserted into customized-size relations and broadcast as a Regular Broadcast Plan (RBP) with multiple channels. Given the data size for each provided service, two algorithms, the Basic Regular Algorithm (BRA) and the Partition Value Algorithm (PVA), can provide static and dynamic RBP construction with multiple-constraint solutions, respectively. Servers have to define the data size of the services and can provide a feasible RBP working with many broadcasting plan operations. The operations become more complicated when there are many kinds of services and the sizes of the data sets are unknown to the server. To that end a framework has been developed that also gives the ability to select low- or high-capacity channels for servicing. Theorems with new analytical results provide direct conditions that establish the existence of solutions for the RBP problem with the compound criterion. Two kinds of solutions are provided: the equal and the non-equal subrelation solutions. The Cuckoo Search (CS) algorithm with Lévy flight behavior has been selected for the optimization. The CS for RBP (CSRP) is developed by applying the theorems to the discovery of RBPs. An additional change to CS has been made in order to strengthen the local search. The CS can also discover RBPs with the minimum number of channels. With all of the above, modern servers can be upgraded with these capabilities for discovering RBPs with fewer channels.
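
    The Lévy flight steps that give Cuckoo Search its long-range exploration are commonly generated with Mantegna's algorithm; a generic version, not tied to the paper's CSRP, is sketched below.

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(size, beta=1.5, rng=None):
            # Mantegna's algorithm: u / |v|^(1/beta) approximates a symmetric
            # Levy-stable step distribution with exponent beta.
            rng = np.random.default_rng() if rng is None else rng
            sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                       (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma_u, size)
            v = rng.normal(0.0, 1.0, size)
            return u / np.abs(v) ** (1 / beta)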

  3. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.

  4. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  5. Regular Motions of Resonant Asteroids

    NASA Astrophysics Data System (ADS)

    Ferraz-Mello, S.

    1990-11-01

    This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS

  6. Exploratory Mediation Analysis via Regularization

    PubMed Central

    Serang, Sarfaraz; Jacobucci, Ross; Brimhall, Kim C.; Grimm, Kevin J.

    2017-01-01

    Exploratory mediation analysis refers to a class of methods used to identify a set of potential mediators of a process of interest. Despite its exploratory nature, conventional approaches are rooted in confirmatory traditions, and as such have limitations in exploratory contexts. We propose a two-stage approach called exploratory mediation analysis via regularization (XMed) to better address these concerns. We demonstrate that this approach is able to correctly identify mediators more often than conventional approaches and that its estimates are unbiased. Finally, this approach is illustrated through an empirical example examining the relationship between college acceptance and enrollment. PMID:29225454

  7. Bayesian inversion of marine CSEM data from the Scarborough gas field using a transdimensional 2-D parametrization

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Key, Kerry; Bodin, Thomas; Myer, David; Constable, Steven

    2014-12-01

    We apply a reversible-jump Markov chain Monte Carlo method to sample the Bayesian posterior model probability density function of 2-D seafloor resistivity as constrained by marine controlled source electromagnetic data. This density function of earth models conveys information on which parts of the model space are illuminated by the data. Whereas conventional gradient-based inversion approaches require subjective regularization choices to stabilize this highly non-linear and non-unique inverse problem and provide only a single solution with no model uncertainty information, the method we use entirely avoids model regularization. The result of our approach is an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. We represent models in 2-D using a Voronoi cell parametrization. To make the 2-D problem practical, we use a source-receiver common midpoint approximation with 1-D forward modelling. Our algorithm is transdimensional and self-parametrizing where the number of resistivity cells within a 2-D depth section is variable, as are their positions and geometries. Two synthetic studies demonstrate the algorithm's use in the appraisal of a thin, segmented, resistive reservoir which makes for a challenging exploration target. As a demonstration example, we apply our method to survey data collected over the Scarborough gas field on the Northwest Australian shelf.

  8. A finite element method with overlapping meshes for free-boundary axisymmetric plasma equilibria in realistic geometries

    NASA Astrophysics Data System (ADS)

    Heumann, Holger; Rapetti, Francesca

    2017-04-01

    Existing finite element implementations for the computation of free-boundary axisymmetric plasma equilibria approximate the unknown poloidal flux function by standard lowest order continuous finite elements with discontinuous gradients. As a consequence, the location of critical points of the poloidal flux, that are of paramount importance in tokamak engineering, is constrained to nodes of the mesh leading to undesired jumps in transient problems. Moreover, recent numerical results for the self-consistent coupling of equilibrium with resistive diffusion and transport suggest the necessity of higher regularity when approximating the flux map. In this work we propose a mortar element method that employs two overlapping meshes. One mesh with Cartesian quadrilaterals covers the vacuum chamber domain accessible by the plasma and one mesh with triangles discretizes the region outside. The two meshes overlap in a narrow region. This approach gives the flexibility to achieve easily and at low cost higher order regularity for the approximation of the flux function in the domain covered by the plasma, while preserving accurate meshing of the geometric details outside this region. The continuity of the numerical solution in the region of overlap is weakly enforced by a mortar-like mapping.

  9. Blind image fusion for hyperspectral imaging with the directional total variation

    NASA Astrophysics Data System (ADS)

    Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane

    2018-04-01

    Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsic low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate for possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.

  10. Systematic size study of an insect antifreeze protein and its interaction with ice.

    PubMed

    Liu, Kai; Jia, Zongchao; Chen, Guangju; Tung, Chenho; Liu, Ruozhuang

    2005-02-01

    Because of their remarkable ability to depress the freezing point of aqueous solutions, antifreeze proteins (AFPs) play a critical role in helping many organisms survive subzero temperatures. The beta-helical insect AFP structures solved to date, consisting of multiple repeating circular loops or coils, are perhaps the most regular protein structures discovered thus far. Taking an exceptional advantage of the unusually high structural regularity of insect AFPs, we have employed both semiempirical and quantum mechanics computational approaches to systematically investigate the relationship between the number of AFP coils and the AFP-ice interaction energy, an indicator of antifreeze activity. We generated a series of AFP models with varying numbers of 12-residue coils (sequence TCTxSxxCxxAx) and calculated their interaction energies with ice. Using several independent computational methods, we found that the AFP-ice interaction energy increased as the number of coils increased, until an upper bound was reached. The increase of interaction energy was significant for each of the first five coils, and there was a clear synergism that gradually diminished and even decreased with further increase of the number of coils. Our results are in excellent agreement with the recently reported experimental observations.

  11. Systematic Size Study of an Insect Antifreeze Protein and Its Interaction with Ice

    PubMed Central

    Liu, Kai; Jia, Zongchao; Chen, Guangju; Tung, Chenho; Liu, Ruozhuang

    2005-01-01

    Because of their remarkable ability to depress the freezing point of aqueous solutions, antifreeze proteins (AFPs) play a critical role in helping many organisms survive subzero temperatures. The β-helical insect AFP structures solved to date, consisting of multiple repeating circular loops or coils, are perhaps the most regular protein structures discovered thus far. Taking an exceptional advantage of the unusually high structural regularity of insect AFPs, we have employed both semiempirical and quantum mechanics computational approaches to systematically investigate the relationship between the number of AFP coils and the AFP-ice interaction energy, an indicator of antifreeze activity. We generated a series of AFP models with varying numbers of 12-residue coils (sequence TCTxSxxCxxAx) and calculated their interaction energies with ice. Using several independent computational methods, we found that the AFP-ice interaction energy increased as the number of coils increased, until an upper bound was reached. The increase of interaction energy was significant for each of the first five coils, and there was a clear synergism that gradually diminished and even decreased with further increase of the number of coils. Our results are in excellent agreement with the recently reported experimental observations. PMID:15713600

  12. Riga-Fede Disease Associated with Natal Teeth: Two Different Approaches in the Same Case.

    PubMed

    Volpato, Luiz Evaristo Ricci; Simões, Cintia Aparecida Damo; Simões, Flávio; Nespolo, Priscila Alves; Borges, Álvaro Henrique

    2015-01-01

    Natal teeth are those present in the oral cavity at the child's birth. These teeth can cause ulcers on the ventral surface of the tongue, lip, and the mother's breast, characterizing Riga-Fede Disease. The treatment depends on the tooth's mobility and the risk of aspiration or swallowing; whether it is a supernumerary or a regular primary tooth; whether it interferes with breastfeeding; breast and oral soft-tissue injuries; and the general state of the child's health. A 1-month-old female infant was diagnosed with two natal teeth and an ulcerated lesion on the ventral surface of the tongue, leading to the clinical diagnosis of Riga-Fede Disease. The treatment performed consisted of maintaining the natal tooth that showed no increased mobility, adding a small increment of glass ionomer cement to its incisal edge, and giving guidance on hygiene with saline solution. Due to the increased mobility of the other natal tooth, surgical removal was performed. The patient was monitored regularly and complete wound healing was observed after 15 days. The proposed treatment was successful and the patient remains in follow-up without recurrence of the lesion after one year.

  13. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  14. Bayesian Inference for Generalized Linear Models for Spiking Neurons

    PubMed Central

    Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias

    2010-01-01

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated data as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627

  15. Low-dose 4D cone-beam CT via joint spatiotemporal regularization of tensor framelet and nonlocal total variation

    NASA Astrophysics Data System (ADS)

    Han, Hao; Gao, Hao; Xing, Lei

    2017-08-01

    Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.

  16. Development of an automated experimental setup for the study of ionic-exchange kinetics. Application to the ionic adsorption, equilibrium attainment and dissolution of apatite compounds.

    PubMed

    Thomann, J M; Gasser, P; Bres, E F; Voegel, J C; Gramain, P

    1990-02-01

    An ion-selective electrode and microcomputer-based experimental setup for the study of ionic-exchange kinetics between a powdered solid and the solution is described. The equipment is composed of easily available commercial devices and a data acquisition and regularization computer program is presented. The system, especially developed to investigate the ionic adsorption, equilibrium attainment and dissolution of hard mineralized tissues, provides good reliable results by taking into account the volume changes of the reacting solution and the electrode behaviour under different experimental conditions, and by avoiding carbonation of the solution. A second computer program, using the regularized data and the experimental parameters, calculates the quantities of protons consumed and calcium released in the case of equilibrium attainment and dissolution of apatite-like compounds. Finally, typical examples of ion-exchange and dissolution kinetics under constant pH of enamel and synthetic hydroxyapatite are examined.

  17. High-resolution imaging-guided electroencephalography source localization: temporal effect regularization incorporation in LORETA inverse solution

    NASA Astrophysics Data System (ADS)

    Boughariou, Jihene; Zouch, Wassim; Slima, Mohamed Ben; Kammoun, Ines; Hamida, Ahmed Ben

    2015-11-01

    Electroencephalography (EEG) and magnetic resonance imaging (MRI) are noninvasive neuroimaging modalities. They are widely used and can be complementary, and their fusion may enhance emerging research fields targeting better exploration of brain activity. Such research has attracted many scientific investigators, especially in order to provide a convivial and helpful advanced clinical-aid tool enabling better neurological exploration. Our present research is in the context of EEG inverse problem resolution and investigates an advanced estimation methodology for the localization of cerebral activity. Our focus is therefore on the integration of temporal priors into the low-resolution brain electromagnetic tomography (LORETA) formalism to solve the inverse problem in EEG. The main idea behind our proposed method is the integration of a temporal projection matrix within the LORETA weighting matrix. A hyperparameter governs this temporal integration, and its choice is critical for obtaining a regularized smooth solution. Our experimental results clearly confirm the impact of the optimization procedure adopted for the temporal regularization parameter, compared to the LORETA method.
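
    LORETA-type estimators belong to the weighted minimum-norm family, whose closed form is worth keeping in mind when reading the above. The sketch shows that generic form without the temporal prior; the names and the simple ridge term are illustrative only.

        import numpy as np

        def weighted_minimum_norm(L, y, W, lam=1e-2):
            # Weighted minimum-norm source estimate for leadfield L (m x n),
            # measurements y (m,) and SPD source weighting W (n x n):
            #   x = W^-1 L^T (L W^-1 L^T + lam I)^-1 y
            Wi_Lt = np.linalg.solve(W, L.T)
            G = L @ Wi_Lt + lam * np.eye(L.shape[0])
            return Wi_Lt @ np.linalg.solve(G, y)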

  18. FPGA-accelerated algorithm for the regular expression matching system

    NASA Astrophysics Data System (ADS)

    Russek, P.; Wiatr, K.

    2015-01-01

    This article describes an algorithm to support a regular expression matching system. The goal was to achieve attractive performance with low energy consumption. The basic idea of the algorithm comes from the concept of the Bloom filter. It starts from the extraction of static sub-strings from the regular expressions. The algorithm is devised to gain from its decomposition into parts intended to be executed by custom hardware and by the central processing unit (CPU). A pipelined custom processor architecture is proposed and the software algorithm is explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in a field programmable gate array (FPGA). The performance results and required resources of the above experiments are given. An example target application for the presented solution is computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
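
    The Bloom-filter idea at the core of the algorithm is to cheaply reject inputs that cannot contain any of the static sub-strings, leaving full regex matching to the CPU only for the survivors. A software sketch of that prefilter follows (a generic illustration; the paper's version lives in FPGA logic).

        import hashlib

        class BloomFilter:
            def __init__(self, size=1 << 20, hashes=4):
                self.size, self.hashes = size, hashes
                self.bits = bytearray(size // 8)

            def _positions(self, item):
                # Derive k bit positions from salted hashes of the item.
                for i in range(self.hashes):
                    h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.size

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p // 8] |= 1 << (p % 8)

            def might_contain(self, item):
                # False: definitely absent. True: possibly present (false
                # positives trigger the exact regex match on the CPU).
                return all(self.bits[p // 8] & (1 << (p % 8))
                           for p in self._positions(item))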

  19. Chaotic and regular instantons in helical shell models of turbulence

    NASA Astrophysics Data System (ADS)

    De Pietro, Massimo; Mailybaev, Alexei A.; Biferale, Luca

    2017-03-01

    Shell models of turbulence have a finite-time blowup in the inviscid limit, i.e., the enstrophy diverges while the single-shell velocities stay finite. The signature of this blowup is represented by self-similar instantonic structures traveling coherently through the inertial range. These solutions might influence the energy transfer and the anomalous scaling properties empirically observed for the forced and viscous models. In this paper we present a study of the instantonic solutions for a set of four shell models of turbulence based on the exact decomposition of the Navier-Stokes equations in helical eigenstates. We find that depending on the helical structure of each model, instantons are chaotic or regular. Some instantonic solutions tend to recover mirror symmetry for scales small enough. Models that have anomalous scaling develop regular nonchaotic instantons. Conversely, models that have nonanomalous scaling in the stationary regime are those that have chaotic instantons. The direction of the energy carried by each single instanton tends to coincide with the direction of the energy cascade in the stationary regime. Finally, we find that whenever the small-scale stationary statistics is intermittent, the instanton is less steep than the dimensional Kolmogorov scaling, independently of whether or not it is chaotic. Our findings further support the idea that instantons might be crucial to describe some aspects of the multiscale anomalous statistics of shell models.

  20. Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources

    PubMed Central

    Rahimi, Azar; Xu, Jingjia; Wang, Linwei

    2013-01-01

    Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activities is much limited. The progress to computationally reconstruct cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness and the lack of a unique solution of the reconstruction problem. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffused or too scattered to reflect the complex spatial structure of current source distribution in the heart. In this work, we propose a general regularization with Lp-norm (1 < p < 2) constraint to bridge the gap and balance between an overly smeared and overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources with increasing extents. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of source distribution is important for revealing the potential disruption to the normal heart excitation. PMID:24348735
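
    A standard solver for the non-smooth Lp penalty with 1 < p < 2 is iteratively reweighted least squares (IRLS); the abstract does not state which solver the authors use, so the sketch below is purely illustrative. Each iteration replaces the Lp term with a weighted quadratic and solves a linear system:

        import numpy as np

        def lp_regularized(A, b, lam=1e-2, p=1.5, iters=50, eps=1e-8):
            """Approximately minimize ||A x - b||_2^2 + lam ||x||_p^p by IRLS."""
            x = np.linalg.lstsq(A, b, rcond=None)[0]          # L2 starting point
            for _ in range(iters):
                # from d/dx |x|^p = p |x|^(p-2) x, smoothed by eps near x = 0
                w = p * (x**2 + eps) ** ((p - 2.0) / 2.0)
                x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
            return x

    As p approaches 1 the weights push small coefficients toward zero (focal solutions); as p approaches 2 they flatten toward ridge regression (diffuse solutions), which is the trade-off the abstract describes.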

  1. Assessing Human Activity in Elderly People Using Non-Intrusive Load Monitoring.

    PubMed

    Alcalá, José M; Ureña, Jesús; Hernández, Álvaro; Gualda, David

    2017-02-11

    The ageing of the population, and people's increasing wish to live independently, are motivating the development of welfare and healthcare models. Existing approaches based on direct health monitoring using body sensor networks (BSN) are precise and accurate; nonetheless, their intrusiveness causes non-acceptance. New approaches seek indirect monitoring through activities of daily living (ADLs), which proves to be a suitable solution. ADL monitoring systems use many heterogeneous sensors, are less intrusive, and are less expensive than BSN; however, the deployment and maintenance of wireless sensor networks (WSN) prevent their widespread acceptance. In this work, a novel technique to monitor human activity, based on non-intrusive load monitoring (NILM), is presented. The proposal uses only smart meter data, which leads to minimal intrusiveness and a potential massive deployment at minimal cost. This could be the key to developing sustainable healthcare models for smart homes, capable of complying with elderly people's demands. This study also uses the Dempster-Shafer theory to provide a daily score of normality with regard to the regular behavior. The approach has been evaluated using real datasets and, additionally, a benchmark against a Gaussian mixture model approach is presented.

  3. The Increase of Critical Thinking Skills through Mathematical Investigation Approach

    NASA Astrophysics Data System (ADS)

    Sumarna, N.; Wahyudin; Herman, T.

    2017-02-01

    Research findings on the critical thinking skills of prospective elementary teachers have shown responses that are not optimal. Critical thinking skills, however, lead a student through the processes of analysis, evaluation and synthesis in solving a mathematical problem. This study explores an alternative solution focused on the mathematics learning conditions in the lecture room, using a mathematical investigation approach. The research method was a quasi-experimental pre-test post-test design, with data analysed using a mixed method with embedded design. Subjects were regular students enrolled in 2014 in the study program for the education of primary school teachers. There were 111 research subjects: 56 students in the experimental group and 55 students in the control group. The results of the study showed that (1) there is a significant difference in the improvement of critical thinking ability between students who learn through the mathematical investigation approach and students who learn through the expository approach, and (2) there is no interaction effect between prior knowledge of mathematics and the learning factor (mathematical investigation versus expository) on the increase in students' critical thinking skills.

  4. High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities

    DTIC Science & Technology

    2015-03-31

    In the present work, we develop a high-order numerical method for solving linear elliptic PDEs with well-behaved variable coefficients in the presence of boundary singularities. The finite-difference (FD) scheme is only consistent for classical solutions of the PDE, and solutions may have reduced regularity due to the boundary conditions; for this reason, we implement the method of singularity subtraction.

  5. Expanded solutions of force-free electrodynamics on general Kerr black holes

    NASA Astrophysics Data System (ADS)

    Li, Huiquan; Wang, Jiancheng

    2017-07-01

    In this work, expanded solutions of force-free magnetospheres on general Kerr black holes are derived through a radial distance expansion method. From the regularity conditions both at the horizon and at spatial infinity, two previously known asymptotic solutions (one of them actually an exact solution) are identified as the only solutions that satisfy the same conditions at the two boundaries. Taking them as initial conditions at the boundaries, expanded solutions up to the first few orders are derived by solving the stream equation order by order. It is shown that our extension of the exact solution can (partially) cure the problems of the solution: it leads to magnetic domination and a mostly timelike current for restricted parameters.

  6. Role of the chemical substitution on the luminescence properties of solid solutions Ca₁₋ₓCdₓWO₄ (0 ≤ x ≤ 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taoufyq, A. (Laboratoire Matériaux et Environnement LME, Faculté des Sciences, Université Ibn Zohr, BP 8106, Cité Dakhla, Agadir; CEA, DEN, Département d'Etudes des Réacteurs, Service de Physique Expérimentale, Laboratoire Dosimétrie Capteurs Instrumentation, 13108 Saint-Paul-lez-Durance)

    2015-10-15

    Highlights: • Luminescence can be modified by chemical substitution in solid solutions Ca₁₋ₓCdₓWO₄. • The various emission spectra (charge transfer) were obtained under X-ray excitation. • Scheelite or wolframite solid solutions presented two types of emission spectra. • A luminescence component depended on cadmium substitution in each solid solution. • A component was only characteristic of oxyanion symmetry in each solid solution. - Abstract: We have investigated the chemical substitution effects on the luminescence properties under X-ray excitation of the solid solutions Ca₁₋ₓCdₓWO₄ with 0 ≤ x ≤ 1. Two types of wide spectral bands, associated with scheelite-type or wolframite-type solid solutions, have been observed at room temperature. We decomposed each spectral band into several spectral components characterized by energies and intensities varying with composition x. One Gaussian component was characterized by an energy decreasing regularly with the composition x, while the other Gaussian component was only related to the tetrahedral or octahedral configurations of the tungstate groups WO₄²⁻ or WO₆⁶⁻. The luminescence intensities exhibited minimum values in the composition range x < 0.5, corresponding to scheelite-type structures; they then increased regularly for cadmium compositions x > 0.5, corresponding to wolframite-type structures.

  7. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in flat regions, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial features of the image and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. An effective spatial-feature indicator called difference curvature is used to identify whether a region is flat or an edge. According to the spatial feature, the SATV regularization method automatically adjusts both the regularization term and the regularization factor. In edge regions, the regularization term approximates the TV functional to preserve edges; in flat regions, it approximates the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, an adaptive regularization factor determined by the spatial feature constrains the regularization strength of the SATV method in different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of the difference curvature to improve numerical stability. Several image metrics are used to quantitatively evaluate the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV regularization method (mean relative error 0.259, mean correlation coefficient 0.738) can endure a relatively high level of noise and improve the resolution of reconstructed images.
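
    A one-dimensional denoising caricature of the adaptive idea, with the paper's difference-curvature indicator replaced by a crude second-derivative estimate and the ERT forward model dropped (all names and constants here are illustrative):

        import numpy as np

        def satv_denoise_1d(y, lam=0.5, step=0.1, iters=200, eps=1e-6):
            """Gradient descent on 0.5 ||u - y||^2 + 0.5 lam sum(w u'^2), where
            the weight w is TV-like (1/|u'|) at edges and Tikhonov-like (1)
            in flat regions."""
            u = y.astype(float).copy()
            for _ in range(iters):
                g = np.gradient(u)
                edge = np.abs(np.gradient(g))           # crude edge indicator
                alpha = edge / (edge.max() + eps)       # ~1 at edges, ~0 if flat
                w = alpha / np.sqrt(g**2 + eps) + (1.0 - alpha)
                u -= step * ((u - y) - lam * np.gradient(w * g))
            return u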

  8. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖₂ subject to x ∈ S = {x : ‖Ax − b‖₂ = min}, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
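
    The RSVD/TRSVD building block is compact enough to sketch (the full MTRSVD method additionally involves the regularization matrix L and inner LSQR solves, omitted here; the code is a generic randomized SVD, not the authors' implementation):

        import numpy as np

        def trsvd(A, k, q=10, rng=None):
            """Rank-(k+q) randomized SVD of A, truncated to rank k."""
            rng = rng or np.random.default_rng(0)
            Omega = rng.standard_normal((A.shape[1], k + q))  # test matrix
            Q, _ = np.linalg.qr(A @ Omega)                    # range of A
            Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]          # truncate to k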

  9. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintains the same second order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere on which the analytical solutions are available and on a series of proteins with various sizes.
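
    In generic form (notation mine, not necessarily the paper's), a regularization of this type splits the potential into a singular Coulombic part, known analytically through the Green's function, and a regular remainder:

        \phi(\mathbf r) = \bar\phi(\mathbf r) + \phi_s(\mathbf r),
        \qquad
        \phi_s(\mathbf r) = \sum_i \frac{q_i}{4\pi\varepsilon_m\,|\mathbf r - \mathbf r_i|},

    so that the numerical scheme only ever discretizes the smooth component \bar\phi, with the singular charges entering through modified interface jump conditions rather than as delta functions on the grid.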

  10. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects described through diffuse-interface models [27]. Numerical simulations of this practical model can be used to evaluate it. The present paper investigates the solution of the tumor-growth model with meshless techniques, applied via a collocation approach that employs multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices stems from the natural behavior of meshless approaches: a meshless method can easily be applied to find the solution of partial differential equations in high dimensions using arbitrary distributions of points on regular and irregular domains. The paper considers a time-dependent system of partial differential equations describing a four-species tumor-growth model. To handle the time variable, two procedures are used: a semi-implicit finite-difference method based on the Crank-Nicolson scheme, and an explicit Runge-Kutta time integration. The first yields a linear system of algebraic equations to be solved at each time step; the second is efficient but conditionally stable. The numerical results reported confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
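
    To illustrate the collocation idea in its simplest setting, a 1D Poisson problem rather than the four-species tumor system (the shape parameter, node count and all names below are illustrative), a Kansa-type MQ-RBF collocation reads:

        import numpy as np

        def mq(x, xj, c):
            return np.sqrt((x - xj)**2 + c**2)          # multiquadric kernel

        def mq_xx(x, xj, c):
            return c**2 / mq(x, xj, c)**3               # its second derivative

        def kansa_poisson(f, n=30, c=0.1):
            """Solve u'' = f on [0, 1], u(0) = u(1) = 0, by MQ collocation."""
            x = np.linspace(0.0, 1.0, n)
            X, XJ = np.meshgrid(x, x, indexing="ij")
            A = mq_xx(X, XJ, c)                 # interior rows: u'' = f
            A[0, :] = mq(x[0], x, c)            # boundary rows: u = 0
            A[-1, :] = mq(x[-1], x, c)
            rhs = f(x)
            rhs[0] = rhs[-1] = 0.0
            coef = np.linalg.solve(A, rhs)
            return x, mq(X, XJ, c) @ coef       # u at the nodes

        # f(x) = -pi^2 sin(pi x) has the exact solution u(x) = sin(pi x).
        x, u = kansa_poisson(lambda t: -np.pi**2 * np.sin(np.pi * t))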

  11. Scalar field cosmologies with inverted potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boisseau, B.; Giacomini, H.; Polarski, D., E-mail: bruno.boisseau@lmpt.univ-tours.fr, E-mail: hector.giacomini@lmpt.univ-tours.fr, E-mail: david.polarski@umontpellier.fr

    Regular bouncing solutions in the framework of a scalar-tensor gravity model were found in a recent work. We reconsider the problem in the Einstein frame (EF) in the present work. Singularities arising at the limit of physical viability of the model in the Jordan frame (JF) are either of the Big Bang or of the Big Crunch type in the EF. As a result we obtain integrable scalar field cosmological models in general relativity (GR) with inverted double-well potentials unbounded from below which possess solutions regular in the future, tending to a de Sitter space, and starting with a Big Bang. The existence of the two fixed points for the field dynamics at late times found earlier in the JF becomes transparent in the EF.

  12. Vacuum-assisted headspace-solid phase microextraction for determining volatile free fatty acids and phenols. Investigations on the effect of pressure on competitive adsorption phenomena in a multicomponent system.

    PubMed

    Trujillo-Rodríguez, María J; Pino, Verónica; Psillakis, Elefteria; Anderson, Jared L; Ayala, Juan H; Yiantzi, Evangelia; Afonso, Ana M

    2017-04-15

    This work proposes a new vacuum headspace solid-phase microextraction (Vac-HSSPME) method combined with gas chromatography-flame ionization detection for the determination of free fatty acids (FFAs) and phenols. All target analytes of the multicomponent solution were volatile, but their low Henry's law constants rendered them amenable to Vac-HSSPME. The ability of a new and easy-to-construct Vac-HSSPME sampler to maintain low-pressure conditions for extended sampling times was concurrently demonstrated. The Vac-HSSPME and regular HSSPME methods were independently optimized and the results were compared at all times. The performances of four commercial SPME fibers and two polymeric ionic liquid (PIL)-based SPME fibers were evaluated, and the best overall results were obtained with the adsorbent-type CAR/PDMS fiber. For the concentrations used here, competitive displacement became more intense for the smaller and more volatile analytes of the multicomponent solution when lowering the sampling pressure. The extraction-time profiles showed that Vac-HSSPME had a dramatic positive effect on extraction kinetics. The local maxima of adsorbed analytes recorded with Vac-HSSPME occurred faster, but were always lower than with regular HSSPME due to the faster analyte loading from the multicomponent solution. Increasing the sampling temperature during Vac-HSSPME reduced the extraction efficiency of smaller analytes due to the enhancement in water molecule collisions with the fiber; this effect was not recorded for the larger phenolic compounds. Based on the optimum values selected, Vac-HSSPME required a shorter extraction time and milder sampling conditions than regular HSSPME: 20 min and 35 °C for Vac-HSSPME versus 40 min and 45 °C for regular HSSPME. The performance of the optimized Vac-HSSPME and regular HSSPME procedures was assessed, and the Vac-HSSPME method proved to be more sensitive, with lower limits of detection (from 0.14 to 13 μg L⁻¹) and better intra-day precision (relative standard deviation values < 10% at the lowest spiked level) than regular HSSPME for almost all target analytes. The proposed Vac-HSSPME method was successfully applied to quantify FFAs and phenols in milk and milk-derivative samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Deconvolution of mixing time series on a graph

    PubMed Central

    Blocker, Alexander W.; Airoldi, Edoardo M.

    2013-01-01

    In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = A x_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135

  14. Modeling solvation effects in real-space and real-time within density functional approaches

    NASA Astrophysics Data System (ADS)

    Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea

    2015-10-01

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
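
    The regularization amounts to smearing each point-like apparent charge into a normalized spherical Gaussian (generic form; the width parameter α is an assumption of this sketch):

        q_k\,\delta(\mathbf r - \mathbf r_k) \;\longrightarrow\;
        \rho_k(\mathbf r) = q_k \left(\frac{\alpha}{\pi}\right)^{3/2}
        e^{-\alpha |\mathbf r - \mathbf r_k|^2},

    whose potential, q_k\,\mathrm{erf}(\sqrt{\alpha}\,|\mathbf r - \mathbf r_k|)/|\mathbf r - \mathbf r_k| in Gaussian units, stays finite at the charge location and recovers the Coulomb potential a few widths away.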

  16. Parameter Identification Of Multilayer Thermal Insulation By Inverse Problems

    NASA Astrophysics Data System (ADS)

    Nenarokomov, Aleksey V.; Alifanov, Oleg M.; Gonzalez, Vivaldo M.

    2012-07-01

    The purpose of this paper is to introduce an iterative regularization method for the study of the radiative and thermal properties of materials, with further applications in the design of Thermal Control Systems (TCS) of spacecraft. In this paper the radiative and thermal properties (heat capacity, emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), a screen-vacuum thermal insulation that is part of the TCS of prospective spacecraft, are estimated. Properties of the materials under study are determined from temperature and heat flux measurement data processing based on the solution of an Inverse Heat Transfer Problem (IHTP). Physical and mathematical models of heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given, and a mathematical formulation of the IHTP based on the sensitivity function approach is presented. Practical testing was performed on a specimen of a real MLI. This paper builds on recent research that developed the approach suggested in [1].

  17. Remarks on the regularity criteria of three-dimensional magnetohydrodynamics system in terms of two velocity field components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Kazuo

    2014-03-15

    We study the three-dimensional magnetohydrodynamics system and obtain its regularity criteria in terms of only two velocity vector field components, eliminating the condition on the third component completely. The proof consists of a new decomposition of the four nonlinear terms of the system and estimating a component of the magnetic vector field in terms of the same component of the velocity vector field. This result may be seen as a component reduction result of many previous works [C. He and Z. Xin, “On the regularity of weak solutions to the magnetohydrodynamic equations,” J. Differ. Equ. 213(2), 234–254 (2005); Y. Zhou, “Remarks on regularities for the 3D MHD equations,” Discrete Contin. Dyn. Syst. 12(5), 881–886 (2005)].

  18. Adiabatic regularization for gauge fields and the conformal anomaly

    NASA Astrophysics Data System (ADS)

    Chu, Chong-Sun; Koyama, Yoji

    2017-03-01

    Adiabatic regularization for quantum field theory in conformally flat spacetime is known for scalar and Dirac fermion fields. In this paper, we complete the construction by establishing the adiabatic regularization scheme for the gauge field. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined in a similar way using Wentzel-Kramers-Brillouin-type (WKB-type) solutions as the scalar fields. As an application of the adiabatic method, we compute the trace of the energy momentum tensor and reproduce the known result for the conformal anomaly obtained by the other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study various renormalized physical quantities of theories coupled to (non-Abelian) gauge fields in conformally flat spacetime, such as conformal supersymmetric Yang Mills, inflation, and cosmology.

  19. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  20. Gaussian black holes in Rastall gravity

    NASA Astrophysics Data System (ADS)

    Spallucci, Euro; Smailagic, Anais

    In this short note we present the solution of the Rastall gravity equations sourced by a Gaussian matter distribution. We find that the black hole metric shares all the common features of other regular black hole solutions of General Relativity discussed in the literature: there is no curvature singularity, and the Hawking radiation leaves a remnant at zero temperature in the form of a massive ordinary particle.

  1. Higher order sensitivity of solutions to convex programming problems without strict complementarity

    NASA Technical Reports Server (NTRS)

    Malanowski, Kazimierz

    1988-01-01

    Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.

  2. On microscopic structure of the QCD vacuum

    NASA Astrophysics Data System (ADS)

    Pak, D. G.; Lee, Bum-Hoon; Kim, Youngman; Tsukioka, Takuya; Zhang, P. M.

    2018-05-01

    We propose a new class of regular stationary axially symmetric solutions in pure QCD which correspond to monopole-antimonopole pairs at macroscopic scale. The solutions represent vacuum field configurations which are locally stable against quantum gluon fluctuations in any small space-time vicinity. This implies that the monopole-antimonopole pair can serve as a structural element in the microscopic description of QCD vacuum formation.

  3. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  4. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated, as it has been demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem, as small noise in the input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem, but the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and enable rapid convergence. Variable splitting is employed to make the L1-norm penalty function differentiable, based on the observation that both positive and negative potentials exist on the epicardial surface. The inverse problem of ECG is then further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noise is large.
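
    A compact sketch of the splitting-plus-projection scheme the abstract describes (the ECG forward matrix A, the fixed step size, and the iteration count are simplifying assumptions; the authors' step-size and stopping rules are not reproduced here):

        import numpy as np

        def l1_gradient_projection(A, b, lam=1e-2, iters=500):
            """Minimize 0.5 ||A x - b||^2 + lam ||x||_1 via the split
            x = u - v with u, v >= 0, i.e. a bound-constrained quadratic
            solved by projected gradient descent."""
            n = A.shape[1]
            u, v = np.zeros(n), np.zeros(n)
            step = 1.0 / np.linalg.norm(A, 2)**2     # 1/Lipschitz constant
            for _ in range(iters):
                g = A.T @ (A @ (u - v) - b)          # gradient of the data term
                u = np.maximum(0.0, u - step * (g + lam))
                v = np.maximum(0.0, v - step * (-g + lam))
            return u - v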

  5. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.

  6. Global Regularity for Several Incompressible Fluid Models with Partial Dissipation

    NASA Astrophysics Data System (ADS)

    Wu, Jiahong; Xu, Xiaojing; Ye, Zhuan

    2017-09-01

    This paper examines the global regularity problem on several 2D incompressible fluid models with partial dissipation. They are the surface quasi-geostrophic (SQG) equation, the 2D Euler equation and the 2D Boussinesq equations. These are well-known models in fluid mechanics and geophysics. The fundamental issue of whether or not they are globally well-posed has attracted enormous attention. The corresponding models with partial dissipation may arise in physical circumstances when the dissipation varies in different directions. We show that the SQG equation with either horizontal or vertical dissipation always has global solutions. This is in sharp contrast with the inviscid SQG equation for which the global regularity problem remains outstandingly open. Although the 2D Euler is globally well-posed for sufficiently smooth data, the associated equations with partial dissipation no longer conserve the vorticity and the global regularity is not trivial. We are able to prove the global regularity for two partially dissipated Euler equations. Several global bounds are also obtained for a partially dissipated Boussinesq system.

  7. Weakly decaying solutions of nonlinear Schrödinger equation in the plane

    NASA Astrophysics Data System (ADS)

    Villarroel, Javier; Prada, Julia; Estévez, Pilar G.

    2017-12-01

    We show that the nonlinear Schrödinger equation in 2 + 1 dimensions possesses a class of regular and rationally decaying solutions associated with interacting solitons. The interesting dynamics of the associated pulses is studied in detail and related to homothetic Lagrange configurations of certain N-body problems. These solutions correspond to the discrete spectrum of the operator associated with the Lax pair. A natural characterization of this spectrum is given. We show that a certain subset of solutions corresponds to rogue waves, localized along curves in the plane. Other configurations like grey solitons, cnoidal waves and general N-lump solutions are also described.

  8. Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint

    NASA Astrophysics Data System (ADS)

    Rodrigues, José Francisco; Santos, Lisa

    2012-08-01

    We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.

  9. Analytic dyon solution in SU(N) grand unified theories

    NASA Astrophysics Data System (ADS)

    Lyi, W. S.; Park, Y. J.; Koh, I. G.; Kim, Y. D.

    1982-10-01

    Analytic solutions which are regular everywhere, including at the origin, are found for certain cases of SU(N) grand unified theories. Attention is restricted to order-1/g behavior of the SU(N) grand unified theory, and aspects of the solutions of the Higgs field of the SU(N) near the origin are considered. Comments regarding the mass, the Pontryagin-like index of the dyon, and magnetic charge are made with respect to the recent report of a monopole discovery.

  10. Analysis of a class of boundary value problems depending on left and right Caputo fractional derivatives

    NASA Astrophysics Data System (ADS)

    Antunes, Pedro R. S.; Ferreira, Rui A. C.

    2017-07-01

    In this work we study boundary value problems associated with a nonlinear fractional ordinary differential equation involving left and right Caputo derivatives. We discuss the regularity of the solutions of such problems and, in particular, give precise necessary conditions so that the solutions are C¹([0, 1]). Taking into account our analytical results, we address the numerical solution of those problems by the augmented-RBF method. Several examples illustrate the good performance of the numerical method.

  11. SU(2) Yang-Mills solitons in R2 gravity

    NASA Astrophysics Data System (ADS)

    Perapechka, I.; Shnir, Ya.

    2018-05-01

    We construct a new family of spherically symmetric regular solutions of SU(2) Yang-Mills theory coupled to pure R2 gravity. The particle-like field configurations possess non-integer non-Abelian magnetic charge. A discussion of the main properties of the solutions and their differences from the usual Bartnik-McKinnon solitons of the asymptotically flat case is presented. It is shown that there is a continuous family of linearly stable non-trivial solutions in which the gauge field has no nodes.

  12. Electronic orbital response of regular extended and infinite periodic systems to magnetic fields. I. Theoretical foundations for static case

    NASA Astrophysics Data System (ADS)

    Springborg, Michael; Molayem, Mohammad; Kirtman, Bernard

    2017-09-01

    A theoretical treatment for the orbital response of an infinite, periodic system to a static, homogeneous, magnetic field is presented. It is assumed that the system of interest has an energy gap separating occupied and unoccupied orbitals and a zero Chern number. In contrast to earlier studies, we do not utilize a perturbation expansion, although we do assume the field is sufficiently weak that the occurrence of Landau levels can be ignored. The theory is developed by analyzing results for large, finite systems and also by comparing with the analogous treatment of an electrostatic field. The resulting many-electron Hamilton operator is forced to be hermitian, but hermiticity is not preserved, in general, for the subsequently derived single-particle operators that determine the electronic orbitals. However, we demonstrate that when focusing on the canonical solutions to the single-particle equations, hermiticity is preserved. The issue of gauge-origin dependence of approximate solutions is addressed. Our approach is compared with several previously proposed treatments, whereby limitations in some of the latter are identified.

  13. Mesh refinement strategy for optimal control problems

    NASA Astrophysics Data System (ADS)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most used technique to solve nonlinear optimal control problems. Regular time meshes with equidistant spacing are frequently used; however, in some cases these meshes cannot accurately capture nonlinear behaviour. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes are spaced non-equidistantly, allowing non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which indicates the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error falls below a user-specified threshold, as in the schematic loop below. The technique is applied to a car-like vehicle problem aiming at minimum fuel consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time compared to using time meshes with equidistant spacing.
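
    A schematic version of such a refinement loop, with the local-error estimator and the threshold left as placeholders (the paper ties the estimator to the underlying collocation scheme):

        import numpy as np

        def refine(nodes, local_error, tol):
            """Bisect every subinterval whose local-error estimate exceeds tol."""
            out = [nodes[0]]
            for i, err in enumerate(local_error):   # err on [nodes[i], nodes[i+1]]
                if err > tol:
                    out.append(0.5 * (nodes[i] + nodes[i + 1]))
                out.append(nodes[i + 1])
            return np.array(out)

        # Outer loop (schematic): solve the optimal control problem on the mesh,
        # estimate local errors, call refine(...), repeat until all errors <= tol.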

  14. Rational Degenerations of M-Curves, Totally Positive Grassmannians and KP2-Solitons

    NASA Astrophysics Data System (ADS)

    Abenda, Simonetta; Grinevich, Petr G.

    2018-03-01

    We establish a new connection between the theory of totally positive Grassmannians and the theory of M-curves using the finite-gap theory for solitons of the KP equation. Here and in the following, KP equation denotes the Kadomtsev-Petviashvili 2 equation [see (1)], which is the first flow of the KP hierarchy; we also assume that all KP times are real. We associate to any point of the real totally positive Grassmannian Gr^tp(N, M) a reducible curve which is a rational degeneration of an M-curve of minimal genus g = N(M − N), and we reconstruct the real algebraic-geometric data à la Krichever for the underlying real bounded multiline KP soliton solutions. From this construction, it follows that these multiline solitons can be explicitly obtained by degenerating regular real finite-gap solutions corresponding to smooth M-curves. In our approach, we rule the addition of each new rational component to the spectral curve via an elementary Darboux transformation which corresponds to a section of a specific projection Gr^tp(r+1, M−N+r+1) ↦ Gr^tp(r, M−N+r).

  15. Numerical 3+1 General Relativistic Magnetohydrodynamics: A Local Characteristic Approach

    NASA Astrophysics Data System (ADS)

    Antón, Luis; Zanotti, Olindo; Miralles, Juan A.; Martí, José M.; Ibáñez, José M.; Font, José A.; Pons, José A.

    2006-01-01

    We present a general procedure to solve numerically the general relativistic magnetohydrodynamics (GRMHD) equations within the framework of the 3+1 formalism. The work reported here extends our previous investigation in general relativistic hydrodynamics (Banyuls et al. 1997) where magnetic fields were not considered. The GRMHD equations are written in conservative form to exploit their hyperbolic character in the solution procedure. All theoretical ingredients necessary to build up high-resolution shock-capturing schemes based on the solution of local Riemann problems (i.e., Godunov-type schemes) are described. In particular, we use a renormalized set of regular eigenvectors of the flux Jacobians of the relativistic MHD equations. In addition, the paper describes a procedure based on the equivalence principle of general relativity that allows the use of Riemann solvers designed for special relativistic MHD in GRMHD. Our formulation and numerical methodology are assessed by performing various test simulations recently considered by different authors. These include magnetized shock tubes, spherical accretion onto a Schwarzschild black hole, equatorial accretion onto a Kerr black hole, and magnetized thick disks accreting onto a black hole and subject to the magnetorotational instability.

  16. Application of generalized singular value decomposition to ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Bhuyan, K.; Singh, S.; Bhuyan, P.

    2004-10-01

    The electron density distribution of the low- and mid-latitude ionosphere has been investigated by the computerized tomography technique using a Generalized Singular Value Decomposition (GSVD) based algorithm. Model ionospheric total electron content (TEC) data obtained from the International Reference Ionosphere 2001 and slant relative TEC data measured at a chain of three stations receiving transit satellite transmissions in Alaska, USA, are used in this analysis. The issue of the optimum efficiency of the GSVD algorithm in the reconstruction of ionospheric structures is addressed through simulation of the equatorial ionization anomaly (EIA), in addition to its application to investigating complicated ionospheric density irregularities. Results show that the Generalized Cross Validation approach to finding the regularization parameter and the corresponding solution gives a very good reconstructed image of the low-latitude ionosphere and the EIA within it. Provided that some minimum norm is fulfilled, the GSVD solution is found to be least affected by considerations such as pixel size and number of ray paths. The method has also been used to investigate the behaviour of the mid-latitude ionosphere under magnetically quiet and disturbed conditions.

  17. Optimal Control and Smoothing Techniques for Computing Minimum Fuel Orbital Transfers and Rendezvous

    NASA Astrophysics Data System (ADS)

    Epenoy, R.; Bertrand, R.

    We investigate in this paper the computation of minimum fuel orbital transfers and rendezvous. Each problem is seen as an optimal control problem and is solved by means of shooting methods [1]. This approach corresponds to the use of Pontryagin's Maximum Principle (PMP) [2-4] and leads to the solution of a Two Point Boundary Value Problem (TPBVP). It is well known that this last one is very difficult to solve when the performance index is fuel consumption because in this case the optimal control law has a particular discontinuous structure called "bang-bang". We will show how to modify the performance index by a term depending on a small parameter in order to yield regular controls. Then, a continuation method on this parameter will lead us to the solution of the original problem. Convergence theorems will be given. Finally, numerical examples will illustrate the interest of our method. We will consider two particular problems: The GTO (Geostationary Transfer Orbit) to GEO (Geostationary Equatorial Orbit) transfer and the LEO (Low Earth Orbit) rendezvous.

  18. Shock formation in the dispersionless Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Grava, T.; Klein, C.; Eggers, J.

    2016-04-01

    The dispersionless Kadomtsev-Petviashvili (dKP) equation (u_t + u u_x)_x = u_{yy} is one of the simplest nonlinear wave equations describing two-dimensional shocks. To solve the dKP equation numerically we use a coordinate transformation inspired by the method of characteristics for the one-dimensional Hopf equation u_t + u u_x = 0. We show numerically that the solution to the transformed equation stays regular for longer times than the solution of the dKP equation. This permits us to extend the dKP solution as the graph of a multivalued function beyond the critical time when the gradients blow up. This overturned solution is multivalued in a lip-shaped region in the (x, y) plane, where the solution of the dKP equation exists in a weak sense only, and a shock front develops. A local expansion reveals the universal scaling structure of the shock, which after a suitable change of coordinates corresponds to a generic cusp catastrophe. We provide a heuristic derivation of the shock front position near the critical point for the solution of the dKP equation, and study the solution of the dKP equation when a small amount of dissipation is added. Using multiple-scale analysis, we show that in the limit of small dissipation and near the critical point of the dKP solution, the solution of the dissipative dKP equation converges to a Pearcey integral. We test and illustrate our results by detailed comparisons with numerical simulations of the regularized equation, the dKP equation, and the asymptotic description given in terms of the Pearcey integral.

  19. The condition of regular degeneration for singularly perturbed systems of linear differential-difference equations.

    NASA Technical Reports Server (NTRS)

    Cooke, K. L.; Meyer, K. R.

    1966-01-01

    Extension of the problem of singular perturbation for a linear scalar constant-coefficient differential-difference equation with a single retardation to several retardations, noting the degenerate equation solution.

  20. Functional Itô versus Banach space stochastic calculus and strict solutions of semilinear path-dependent equations

    NASA Astrophysics Data System (ADS)

    Cosso, Andrea; Russo, Francesco

    2016-11-01

    Functional Itô calculus was introduced in order to expand a functional F(t, X_{·+t}, X_t) depending on time t and on the past and present values of the process X. Another possibility to expand F(t, X_{·+t}, X_t) consists in considering the path X_{·+t} = {X_{x+t}, x ∈ [−T, 0]} as an element of the Banach space C([−T, 0]) of continuous functions on [−T, 0] and to use Banach space stochastic calculus. The aim of this paper is threefold. (1) To reformulate functional Itô calculus, separating time and past, making use of the regularization procedures which match more naturally the notion of horizontal derivative which is one of the tools of that calculus. (2) To exploit this reformulation in order to discuss the (not obvious) relation between the functional and the Banach space approaches. (3) To study existence and uniqueness of smooth solutions to path-dependent partial differential equations which naturally arise in the study of functional Itô calculus. More precisely, we study a path-dependent equation of Kolmogorov type which is related to the window process of the solution to an Itô stochastic differential equation with path-dependent coefficients. We also study a semilinear version of that equation.

  1. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity at lower altitudes; in particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not integrate the error-containing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on the measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors than AI. A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded, based on the results presented in this study, that VR offers a definite advantage over AI in the quality of the refractivity.
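
    In the linear case, the variational problem described above reduces to weighted least squares with a background term; a closed-form sketch (F is a discretized forward Abel operator, R and B the measurement and background error covariances; all names are mine) is:

        import numpy as np

        def variational_retrieval(F, y, x_b, R, B):
            """Minimize (F x - y)^T R^{-1} (F x - y) + (x - x_b)^T B^{-1} (x - x_b).
            The linear problem has this closed form; the paper's nonlinear
            problem is instead solved iteratively with the adjoint technique."""
            Ri, Bi = np.linalg.inv(R), np.linalg.inv(B)
            H = F.T @ Ri @ F + Bi                  # Hessian of the cost function
            return np.linalg.solve(H, F.T @ Ri @ y + Bi @ x_b)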

  2. Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization

    NASA Astrophysics Data System (ADS)

    Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna

    2014-12-01

    We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields the resulting renormalization method is always applicable; we compute several examples. We also analyze the Hopf-algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.

  3. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, for tasks such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations and invariance under changes of parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that, to avoid the systematic underestimation of curvature in planar curve fitting, circles must be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
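    The paper's criterion can be checked numerically: a stabilizer that penalizes the variation of curvature along a curve assigns zero energy to circles (constant curvature), making them the curves of maximum smoothness. The discretization below is a minimal sketch of that idea, not the authors' stabilizer set.

```python
import numpy as np

def discrete_curvature(pts):
    """Turning-angle curvature estimate at each vertex of a closed polyline."""
    fwd = np.roll(pts, -1, axis=0) - pts
    bwd = pts - np.roll(pts, 1, axis=0)
    cross = bwd[:, 0] * fwd[:, 1] - bwd[:, 1] * fwd[:, 0]  # 2D cross product
    dot = np.einsum('ij,ij->i', bwd, fwd)
    ds = 0.5 * (np.linalg.norm(fwd, axis=1) + np.linalg.norm(bwd, axis=1))
    return np.arctan2(cross, dot) / ds

def curvature_variation_energy(pts):
    """Penalize the change of curvature along the curve: zero for a circle."""
    k = discrete_curvature(pts)
    return float(np.sum((np.roll(k, -1) - k) ** 2))

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[2 * np.cos(t), np.sin(t)]
print(curvature_variation_energy(circle))   # ~0: circles are maximally smooth
print(curvature_variation_energy(ellipse))  # > 0: curvature varies
```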

  4. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by the nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both the nonlinear forward Stokes and the linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, indicating a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite-element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
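    The reported collapse of the Hessian spectrum is the generic signature of a smoothing parameter-to-observable map. The sketch below uses a Gaussian blur as a hypothetical stand-in for such a map (not the Stokes-based operator) to show how quickly the singular values decay, which is why Tikhonov or TV regularization is needed.

```python
import numpy as np

# Ill-posedness sketch: singular values of a smoothing forward operator
# (a stand-in for the parameter-to-velocity map) decay rapidly to zero,
# so inversion needs regularization to be stable.
n = 200
x = np.linspace(0, 1, n)
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))  # Gaussian blur
G /= G.sum(axis=1, keepdims=True)
s = np.linalg.svd(G, compute_uv=False)
print("sigma_1 / sigma_50:", s[0] / s[49])  # many orders of magnitude
```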

  5. Slice regular functions of several Clifford variables

    NASA Astrophysics Data System (ADS)

    Ghiloni, R.; Perotti, A.

    2012-11-01

    We introduce a class of slice regular functions of several Clifford variables. Our approach to the definition of slice functions is based on the concept of stem functions of several variables and on the introduction on real Clifford algebras of a family of commuting complex structures. The class of slice regular functions includes, in particular, the family of (ordered) polynomials in several Clifford variables. We prove some basic properties of slice and slice regular functions and give examples to illustrate this function theory. In particular, we give integral representation formulas for slice regular functions and a Hartogs type extension result.

  6. Challenges in the quality assurance of elemental and isotopic analyses in the nuclear domain benefitting from high resolution ICP-OES and sector field ICP-MS.

    PubMed

    Krachler, Michael; Alvarez-Sarandes, Rafael; Van Winckel, Stefaan

    Accurate analytical data fundamentally reinforce the meaningfulness of nuclear fuel performance assessments and nuclear waste characterization. Because matrix-matched certified reference materials are regularly lacking, quality assurance of the elemental and isotopic analysis of nuclear materials remains a challenging endeavour. In this context, this review highlights various dedicated experimental approaches envisaged at the European Commission-Joint Research Centre-Institute for Transuranium Elements to overcome this limitation, mainly focussing on the use of high resolution-inductively coupled plasma-optical emission spectrometry (HR-ICP-OES) and sector field-inductively coupled plasma-mass spectrometry (SF-ICP-MS). α- and γ-spectrometry are also included here to help characterise the investigated actinide solutions extensively with respect to their actual concentration, potential impurities and isotopic purity.

  7. A new aerodynamic integral equation based on an acoustic formula in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1984-01-01

    An aerodynamic integral equation for bodies moving at transonic and supersonic speeds is presented. Based on a time-dependent acoustic formula for calculating the noise emanating from the outer portion of a propeller blade travelling at high speed (the Ffowcs Williams-Hawkings formulation), the loading terms and a conventional thickness source term are retained. Two surface and three line integrals are employed to solve an equation for the loading noise. The near-field term is regularized using the collapsing sphere approach to obtain semiconvergence on the blade surface. A singular integral equation is thereby derived for the unknown surface pressure, which is amenable to numerical solution using Galerkin or collocation methods. The technique is useful for studying nonuniform inflow to the propeller.

  8. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
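    SciPy's lsqr exposes Tikhonov damping through its damp argument, and scanning it while monitoring the residual gives a discrepancy-style selection in the same Golub-Kahan (LSQR) setting. This is a generic sketch of the idea with a random toy system, not the authors' criterion.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
m, n = 300, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)  # hypothetical system matrix
x_true = np.zeros(n); x_true[60:90] = 1.0     # toy "initial pressure"
sigma = 0.05
b = A @ x_true + sigma * rng.standard_normal(m)

# lsqr(..., damp=lam) minimizes ||Ax-b||^2 + lam^2 ||x||^2 using
# Golub-Kahan bidiagonalization (the reduction behind LSQR).
best = None
for lam in np.logspace(-3, 1, 30):
    x = lsqr(A, b, damp=lam)[0]
    r = np.linalg.norm(A @ x - b)
    # Discrepancy principle: residual should match the noise level.
    score = abs(r - sigma * np.sqrt(m))
    if best is None or score < best[0]:
        best = (score, lam, x)
print("chosen lambda:", best[1])
```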

  9. Development of Advanced Signal Processing and Source Imaging Methods for Superparamagnetic Relaxometry

    PubMed Central

    Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.

    2017-01-01

    Superparamagnetic relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early-stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION signal using superconducting quantum interference device (SQUID) sensors. Substantial research has been carried out on developing the SQUID hardware and on improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and the post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity of detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions under poor SNR conditions similar to the SPMR detection sensitivity on the order of 1000 cells. We believe such algorithms will help establish industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
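    The multi-start idea (many random initializations of a nonlinear fit, keeping the best local optimum so no user guess is needed) is easy to sketch. The single-exponential decay model below is a toy stand-in for the dipole source model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 200)
y = 3.0 * np.exp(-t / 0.4) + 0.05 * rng.standard_normal(t.size)  # toy decay signal

def cost(p):
    """Sum-of-squares misfit of a single-exponential decay model."""
    amp, tau = p
    return np.sum((amp * np.exp(-t / np.maximum(tau, 1e-6)) - y) ** 2)

# Multi-start: many random initial guesses, keep the best local optimum,
# in the spirit of the paper's multi-start dipole search (toy model here).
fits = [minimize(cost, x0=rng.uniform([0.1, 0.05], [10.0, 2.0]))
        for _ in range(20)]
best = min(fits, key=lambda r: r.fun)
print("amp, tau =", best.x)
```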

  10. Regularization and Approximation of a Class of Evolution Problems in Applied Mathematics

    DTIC Science & Technology

    1991-01-01

    FINAL REPORT (AD-A242 223, November 1991): "Regularization and Approximation of a Class of Evolution Problems in Applied Mathematics," The University of Texas at Austin, Austin, TX 78712. ... a micro-structured parabolic system. A mathematical analysis of the regularized equations has been developed to support our approach. ...

  11. Seismic Sources for the Territory of Georgia

    NASA Astrophysics Data System (ADS)

    Tsereteli, N. S.; Varazanashvili, O.

    2011-12-01

    The southern Caucasus is an earthquake-prone region where devastating earthquakes have repeatedly caused significant loss of lives, infrastructure and buildings. The high geodynamic activity of the region, expressed in both seismic and aseismic deformations, is conditioned by the still-ongoing convergence of lithospheric plates and the northward propagation of the Afro-Arabian continental block at a rate of several cm/year. The geometry of tectonic deformations in the region is largely determined by the wedge-shaped rigid Arabian block intensively indented into the relatively mobile Middle East-Caucasian region. Georgia is a partner in the ongoing regional project EMME, whose main objective is the calculation of earthquake hazard uniformly and to high standards. One approach used in the project is probabilistic seismic hazard assessment, in which the first required parameter is the definition of seismic source zones. Seismic sources can be either faults or area sources. Seismoactive structures of Georgia are identified mainly on the basis of the correlation between the neotectonic structures of the region and earthquakes. The requirements of modern PSH software on the geometry of faults are very demanding; as our knowledge of active fault geometry is not sufficient, area sources were used. Seismic sources are defined as zones characterized by more or less uniform seismicity. Knowledge of the processes occurring deep in the Earth is poor, owing to the difficulty of direct measurement; from this point of view, the reliable data obtained from earthquake fault plane solutions are invaluable for understanding the current tectonic life of the investigated area. There are two methods for the identification of seismic sources. The first is the seismotectonic approach, based on the identification of extensive homogeneous seismic sources (SS) with the definition of the probability of occurrence of the maximum earthquake Mmax. In the second method, seismic sources are identified on the basis of structural geology, seismicity parameters and seismotectonics; this latter approach was used here. To achieve this, it was necessary to solve the following problems: to calculate the parameters of seismotectonic deformation; to reveal regularities in the character of earthquake fault plane solutions; and to use the obtained regularities to develop principles for establishing borders between the various hierarchical and scale levels of seismic deformation fields and to give them a geological interpretation. Three-dimensional matching of active faults, with their real geometrical dimensions, to earthquake sources has been investigated. Finally, each zone has been defined by the parameters: geometry, magnitude-frequency parameters, maximum magnitude, and depth distribution, as well as the modern dynamical characteristics widely used for complex processes.

  12. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  13. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by minimizing U(m) = ‖F(m) − d‖² + λP(m). The second term is the stabilizing functional, with P(m) = ‖∇m‖² the usual choice, and λ the regularization parameter. Due to the inclusion of this roughness penalizer, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts. As is well known, total variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers for nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, as well as on hybrid, TV, second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is that of low-induction-number magnetic dipoles. Nonsmooth regularizers are treated using the Legendre-Fenchel transform.
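    As background for the Bregman-based algorithms mentioned, here is a minimal split Bregman solver for the convex 1D TV problem min_u ½‖u − f‖² + λ‖Du‖₁; the nonsmooth, nonconvex and infimal-convolution variants studied in the abstract build on this machinery but are not reproduced here.

```python
import numpy as np

def shrink(x, g):
    """Soft-thresholding, the proximal map of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - g, 0.0)

def tv_denoise_split_bregman(f, lam, mu=1.0, n_iter=100):
    """1D TV denoising: min_u 0.5||u-f||^2 + lam*||Du||_1 via split Bregman."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)   # forward-difference operator (n-1 x n)
    A = np.eye(n) + mu * D.T @ D     # u-subproblem system matrix (fixed)
    d = np.zeros(n - 1); b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))  # quadratic subproblem
        d = shrink(D @ u + b, lam / mu)                 # L1 subproblem
        b = b + D @ u - d                               # Bregman update
    return u

rng = np.random.default_rng(3)
truth = np.r_[np.zeros(50), np.ones(50), 0.3 * np.ones(50)]  # blocky model
noisy = truth + 0.1 * rng.standard_normal(truth.size)
rec = tv_denoise_split_bregman(noisy, lam=0.5)
print("error:", np.linalg.norm(rec - truth) / np.linalg.norm(truth))
```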

  14. X-ray computed tomography using curvelet sparse regularization.

    PubMed

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  15. Rotational flow in tapered slab rocket motors

    NASA Astrophysics Data System (ADS)

    Saad, Tony; Sams, Oliver C.; Majdalani, Joseph

    2006-10-01

    Internal flow modeling is a requisite for obtaining critical parameters in the design and fabrication of modern solid rocket motors. In this work, the analytical formulation of internal flows particular to motors with tapered sidewalls is pursued. The analysis employs the vorticity-streamfunction approach to treat this problem assuming steady, incompressible, inviscid, and nonreactive flow conditions. The resulting solution is rotational following the analyses presented by Culick for a cylindrical motor. In an extension to Culick's work, Clayton has recently managed to incorporate the effect of tapered walls. Here, an approach similar to that of Clayton is applied to a slab motor in which the chamber is modeled as a rectangular channel with tapered sidewalls. The solutions are shown to be reducible, at leading order, to Taylor's inviscid profile in a porous channel. The analysis also captures the generation of vorticity at the surface of the propellant and its transport along the streamlines. It is from the axial pressure gradient that the proper form of the vorticity is ascertained. Regular perturbations are then used to solve the vorticity equation that prescribes the mean flow motion. Subsequently, numerical simulations via a finite volume solver are carried out to gain further confidence in the analytical approximations. In illustrating the effects of the taper on flow conditions, comparisons of total pressure and velocity profiles in tapered and nontapered chambers are entertained. Finally, a comparison with the axisymmetric flow analog is presented.

  16. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic, real-time determination of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.

  17. Standardized Six-Step Approach to the Performance of the Focused Basic Obstetric Ultrasound Examination.

    PubMed

    Abuhamad, Alfred; Zhao, Yili; Abuhamad, Sharon; Sinkovskaya, Elena; Rao, Rashmi; Kanaan, Camille; Platt, Lawrence

    2016-01-01

    This study aims to validate the feasibility and accuracy of a new standardized six-step approach to the performance of the focused basic obstetric ultrasound examination, and to compare the new approach with the regular approach performed in the scheduled obstetric ultrasound examination. A new standardized six-step approach to the performance of the focused basic obstetric ultrasound examination (evaluating fetal presentation, fetal cardiac activity, presence of multiple pregnancy, placental localization, amniotic fluid volume, and biometric measurements) was prospectively performed on 100 pregnant women between 18+0 and 27+6 weeks of gestation and another 100 pregnant women between 28+0 and 36+6 weeks of gestation. The agreement of findings for each of the six steps of the standardized approach was evaluated against the regular approach. In all ultrasound examinations performed, substantial to perfect agreement (kappa values between 0.64 and 1.00) was observed between the new standardized six-step approach and the regular approach. The new standardized six-step approach to the focused basic obstetric ultrasound examination can be performed successfully and accurately between 18+0 and 36+6 weeks of gestation. This standardized approach can be of significant benefit in limited-resource settings and in point-of-care obstetric ultrasound applications.
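    The agreement statistic quoted here is Cohen's kappa; given paired readings from the two approaches, it is a one-line computation, e.g. with scikit-learn (the labels below are invented for illustration).

```python
from sklearn.metrics import cohen_kappa_score

# Agreement between the six-step and regular approaches for one step,
# e.g. fetal presentation (toy labels; the study reports kappa 0.64-1.00).
six_step = ["cephalic", "breech", "cephalic", "cephalic", "breech", "cephalic"]
regular  = ["cephalic", "breech", "cephalic", "breech",   "breech", "cephalic"]
print(cohen_kappa_score(six_step, regular))
```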

  18. Detection of regularities in variation in geomechanical behavior of rock mass during multi-roadway preparation and mining of an extraction panel

    NASA Astrophysics Data System (ADS)

    Tsvetkov, AB; Pavlova, LD; Fryanov, VN

    2018-03-01

    The results of numerical simulation of the stress–strain state in a rock block and the surrounding mass under multi-roadway preparation for mining are presented. Numerical solutions obtained by nonlinear modeling and by using the constitutive relations of the theory of elasticity are compared. Regularities of the stress distribution in the vicinity of pillars located in the abutment pressure zone are found.

  19. X-Ray Phase Imaging for Breast Cancer Detection

    DTIC Science & Technology

    2010-09-01

    ... regularization seeks the minimum-norm, least-squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory ... a measure of norm that can effectively reflect the accuracy of the retrieved data as an image; if ‖δI_{k+1} − δI_k‖ is less than a predefined threshold value β ... pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the ...

  20. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn

    Purpose: Cerebral perfusion computed tomography (PCT) imaging has been widely used in the clinic as an accurate and fast examination for acute ischemic stroke. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) demonstrated that the PD-STV approach outperformed other existing approaches in terms of noise-induced artifact reduction and accurate perfusion hemodynamic map (PHM) estimation. In the patient data study, the present PD-STV approach yielded accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation in cerebral PCT imaging in the case of low mAs.
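    As a baseline for what PD-STV improves on, the residue function can be estimated by Tikhonov-regularized deconvolution of the tissue curve against the arterial input function (AIF). Everything below (gamma-variate AIF, sampling, the λ rule) is a hypothetical toy setup, and the quadratic penalty stands in for, rather than implements, the structure tensor TV.

```python
import numpy as np
from scipy.linalg import toeplitz

# Tikhonov-regularized deconvolution of the PCT residue function:
# C(t) = (AIF * k)(t) dt  =>  C = A k, with A a lower-triangular Toeplitz
# matrix built from the arterial input function (toy setup throughout).
rng = np.random.default_rng(4)
t = np.arange(0, 60, 1.0)                           # seconds, 1 s sampling
aif = (t / 5.0) ** 3 * np.exp(-t / 1.5)             # toy gamma-variate AIF
k_true = 0.6 * np.exp(-np.maximum(t - 2, 0) / 4.0)  # toy flow-scaled residue
A = toeplitz(aif, np.zeros_like(aif))               # convolution matrix, dt = 1 s
c = A @ k_true + 0.5 * rng.standard_normal(t.size)  # noisy tissue curve

lam = 0.1 * np.linalg.svd(A, compute_uv=False)[0]   # fraction of sigma_max
k_hat = np.linalg.solve(A.T @ A + lam**2 * np.eye(t.size), A.T @ c)
print("CBF estimate ~ max of residue:", k_hat.max())
```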

  1. Orbital theory in terms of KS elements with luni-solar perturbations

    NASA Astrophysics Data System (ADS)

    Sellamuthu, Harishkumar; Sharma, Ram

    2016-07-01

    Precise orbit computation of Earth-orbiting satellites is essential for efficient mission planning in planetary exploration, navigation and satellite geodesy. The third-body perturbations of the Sun and the Moon predominantly affect satellite motion in high-altitude and elliptical orbits, where the effect of atmospheric drag is negligible. The physics of the luni-solar gravity effect on Earth satellites has been studied extensively over the years. The combined luni-solar gravitational attraction induces a cumulative effect on the dynamics of satellite orbits, which mainly causes the perigee altitude to oscillate. Though accurate orbital parameters are computed by numerical integration with respect to complex force models, analytical theories are highly valued for the manifold of solutions they provide for relatively simple force models. During close approach, the classical equations of motion of celestial mechanics are almost singular, and they are unstable for long-term orbit propagation. A new singularity-free analytical theory in terms of KS (Kustaanheimo and Stiefel) regular elements with respect to the luni-solar perturbation is developed. These equations are regular everywhere, and the eccentric anomaly is the independent variable. The Plataforma Solar de Almería (PSA) algorithm and a Fourier series algorithm are used to compute accurate positions of the Sun and the Moon, respectively. Numerical studies are carried out for a wide range of initial parameters, and the analytical solutions are found to be satisfactory when compared with numerically integrated values. The symmetrical nature of the equations allows only two of the nine equations to be solved for computing the state vector and the time; only a change in the initial conditions is required to solve the other equations. This theory will find multiple applications, including on-board software packages and mission analysis.
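    The elementary ingredient of any KS-based theory is the Kustaanheimo-Stiefel map itself, which sends a four-dimensional parametric point u to physical space with |x| = |u|²; that norm property is what removes the collision singularity once time is replaced by an eccentric-anomaly-like variable. Below is a quick numerical check of one common form of the map (the full regular element equations are beyond a short sketch).

```python
import numpy as np

def ks_map(u):
    """One common form of the KS (Kustaanheimo-Stiefel) map R^4 -> R^3."""
    u1, u2, u3, u4 = u
    return np.array([u1**2 - u2**2 - u3**2 + u4**2,
                     2.0 * (u1 * u2 - u3 * u4),
                     2.0 * (u1 * u3 + u2 * u4)])

rng = np.random.default_rng(5)
u = rng.standard_normal(4)
x = ks_map(u)
# Defining property |x| = |u|^2: physical distance equals the squared norm
# in parametric space, which regularizes the r = 0 collision singularity.
print(np.linalg.norm(x), np.dot(u, u))  # equal up to round-off
```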

  2. Accelerometric comparison of the locomotor pattern of horses sedated with xylazine hydrochloride, detomidine hydrochloride, or romifidine hydrochloride.

    PubMed

    López-Sanromán, F Javier; Holmbak-Petersen, Ronald; Varela, Marta; del Alamo, Ana M; Santiago, Isabel

    2013-06-01

    To evaluate the duration of effects on movement patterns of horses after sedation with equipotent doses of xylazine hydrochloride, detomidine hydrochloride, or romifidine hydrochloride and determine whether accelerometry can be used to quantify differences among drug treatments. 6 healthy horses. Each horse was injected IV with saline (0.9% NaCl) solution (10 mL), xylazine diluted in saline solution (0.5 mg/kg), detomidine diluted in saline solution (0.01 mg/kg), or romifidine diluted in saline solution (0.04 mg/kg) in random order. A triaxial accelerometric device was used for gait assessment 15 minutes before and 5, 15, 30, 45, 60, 75, 90, 105, and 120 minutes after each treatment. Eight variables were calculated, including speed, stride frequency, stride length, regularity, dorsoventral power, propulsive power, mediolateral power, and total power; the force of acceleration and 3 components of power were then calculated. Significant differences were evident in stride frequency and regularity between treatments with saline solution and each α2-adrenoceptor agonist drug; in speed, dorsoventral power, propulsive power, total power, and force values between treatments with saline solution and detomidine or romifidine; and in mediolateral power between treatments with saline solution and detomidine. Stride length did not differ among treatments. Accelerometric evaluation of horses administered α2-adrenoceptor agonist drugs revealed more prolonged sedative effects of romifidine, compared with effects of xylazine or detomidine. Accelerometry could be useful in assessing the effects of other sedatives and analgesics. Accelerometric data may be helpful in drug selection for situations in which a horse's balance and coordination are important.

  3. Automatic atlas-based three-label cartilage segmentation from MR knee images

    PubMed Central

    Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc

    2016-01-01

    Osteoarthritis (OA) is the most common form of joint disease and often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise to allow for reliable and high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. A particular challenge is the thinness of cartilage, its relatively small volume in comparison to surrounding tissue and the difficulty to locate cartilage interfaces – for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare to other cartilage segmentation approaches we validate based on the 50 images of the SKI10 dataset. PMID:25128683

  4. Regularized magnetotelluric inversion based on a minimum support gradient stabilizing functional

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Yu, Peng; Zhang, Luolei; Feng, Shaokong; Utada, Hisashi

    2017-11-01

    Regularization is used to solve the ill-posed problem of magnetotelluric inversion, usually by adding a stabilizing functional to the objective functional that allows us to obtain a stable solution. Among a number of possible stabilizing functionals, smoothing constraints are most commonly used, which produce spatially smooth inversion results. However, in some cases, the focused imaging of a sharp electrical boundary is necessary. Although past works have proposed functionals that may be suitable for the imaging of a sharp boundary, such as minimum support and minimum gradient support (MGS) functionals, they involve some difficulties and limitations in practice. In this paper, we propose a minimum support gradient (MSG) stabilizing functional as another possible choice of focusing stabilizer. In this approach, we calculate the gradient of the model stabilizing functional of the minimum support, which affects both the stability and the sharp boundary focus of the inversion. We then apply the discrete weighted matrix form of each stabilizing functional to build a unified form of the objective functional, allowing us to perform a regularized inversion with a variety of stabilizing functionals in the same framework. By comparing the one-dimensional and two-dimensional synthetic inversion results obtained using the MSG stabilizing functional and those obtained using other stabilizing functionals, we demonstrate that the MSG results are not only capable of clearly imaging a sharp geoelectrical interface but also quite stable and robust. Overall good performance in terms of both data fitting and model recovery suggests that this stabilizing functional is effective and useful in practical applications.

  5. Circular geodesic of Bardeen and Ayon-Beato-Garcia regular black-hole and no-horizon spacetimes

    NASA Astrophysics Data System (ADS)

    Stuchlík, Zdeněk; Schee, Jan

    2015-12-01

    In this paper, we study the circular geodesic motion of test particles and photons in the Bardeen and Ayon-Beato-Garcia (ABG) geometry describing spherically symmetric regular black-hole or no-horizon spacetimes. While the Bardeen geometry is not an exact solution of Einstein's equations, the ABG spacetime is related to self-gravitating charged sources governed by Einstein's gravity and nonlinear electrodynamics. Both are characterized by the mass parameter m and the charge parameter g. We demonstrate that, in similarity to the Reissner-Nordstrom (RN) naked singularity spacetimes, an antigravity static sphere should exist in all the no-horizon Bardeen and ABG solutions, which can be surrounded by a Keplerian accretion disc. However, contrary to the RN naked singularity spacetimes, the ABG no-horizon spacetimes with parameter g/m > 2 can also contain an additional inner Keplerian disc hidden under the static antigravity sphere. Properties of the geodesic structure are reflected by simple, observationally relevant optical phenomena. We give silhouettes of the regular black-hole and no-horizon spacetimes, and profiled spectral lines generated by Keplerian rings radiating at a fixed frequency and located in the strong gravity region at or near the marginally stable circular geodesics. We demonstrate that the profiled spectral lines related to the regular black holes are qualitatively similar to those of the Schwarzschild black holes, giving only small quantitative differences. On the other hand, the regular no-horizon spacetimes give clear qualitative signatures of their presence when compared to the Schwarzschild spacetimes. Moreover, it is possible to distinguish the Bardeen and ABG no-horizon spacetimes if the inclination angle to the observer is known.

  6. A new solution procedure for a nonlinear infinite beam equation of motion

    NASA Astrophysics Data System (ADS)

    Jang, T. S.

    2016-10-01

    The goal of this paper is a purely theoretical question, which would nevertheless be fundamental in computational partial differential equations: can a linear solution-structure for the equation of motion of an infinite nonlinear beam be directly manipulated to construct its nonlinear solution? Here, the equation of motion is modeled mathematically as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then found, which is taken as a linear solution-structure. It enables us to formulate a nonlinear integral equation of the second kind, equivalent to the original equation of motion. The fixed-point approach, applied to this integral equation, results in a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion, whose iterative process conveniently consists of just simple, regular numerical integration; i.e., it is fairly simple as well as straightforward to apply. A mathematical analysis of both the convergence and the uniqueness of the iterative procedure is carried out by proving the contractive character of a nonlinear operator. It follows, therefore, that the procedure is a useful nonlinear strategy for integrating the equation of motion of a nonlinear infinite beam, whereby the preceding question may be answered. In addition, it is worth noticing that the pseudo-parameter introduced here plays a double role: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the iterative method proposed here.
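    The flavor of the proposed procedure, a fixed-point iteration whose only numerical work is regular quadrature on a second-kind nonlinear integral equation, can be sketched generically. The kernel, forcing and nonlinearity below are toy stand-ins chosen to make the map contractive, not the beam formalism.

```python
import numpy as np

# Fixed-point (Picard) iteration for u(x) = g(x) + ∫_0^1 K(x,s) f(u(s)) ds,
# a generic second-kind nonlinear integral equation solved with nothing
# but regular quadrature, mirroring the paper's strategy.
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] = w[-1] = 0.5 * (x[1] - x[0])  # trapezoid weights
K = 0.2 * np.exp(-np.abs(x[:, None] - x[None, :]))  # toy contractive kernel
g = np.sin(np.pi * x)
f = np.tanh                                         # mild nonlinearity

u = g.copy()
for it in range(100):
    u_new = g + K @ (w * f(u))       # one regular quadrature per iteration
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
print("converged in", it + 1, "iterations")
```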

  7. Solutal Convection Around Growing Protein Crystal and Diffusional Purification in Space

    NASA Technical Reports Server (NTRS)

    Lee, Chun P.; Chernov, Alexander A.

    2004-01-01

    At least some protein crystals were found to preferentially trap microheterogeneous impurities. The latter are, for example, dimer molecules of the crystallizing proteins (e.g. ferritin, lysozyme), or regular molecules on whose surfaces small molecules or ions are adsorbed (e.g. acetylated lysozyme), modifying the molecular charge. Impurities may induce lattice defects and deteriorate structural resolution. The distribution of impurities between the mother solution and the growing crystal is defined by two interrelated distribution coefficients: κ = ρ₂ᶜ/ρ₂ and K = (ρ₂ᶜ/ρ₁ᶜ)/(ρ₂/ρ₁). Here, ρ₂, ρ₁ and ρ₂ᶜ, ρ₁ᶜ are the densities of the impurity (2) and the regular protein (1) in solution at the growing interface and within the crystal ("c"), respectively. For the microheterogeneous impurities studied, K ≈ 2-4, so that κ ≈ 10²-10³, since K = κ(ρ₁/ρ₁ᶜ) and the protein solubility ratio ρ₁/ρ₁ᶜ ≪ 1. Therefore, a crystal growing in the absence of convection purifies the mother solution around itself and grows cleaner and, probably, more perfect. If convection is present, the solution flow permanently brings new impurities to the crystal. This work theoretically addressed two subjects: 1) the onset of convection, and 2) the distribution of impurities.

  8. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
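    The object whose regularization is being tuned is the Tikhonov-regularized minimum norm inverse operator, W = Gᵀ(GGᵀ + λI)⁻¹. The sketch below only shows how λ enters; the lead field is random, and the factor-of-100 gap between the power and coherence settings simply restates the paper's empirical finding rather than re-deriving it.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sensors, n_sources = 64, 500
G = rng.standard_normal((n_sensors, n_sources))  # hypothetical lead field

def mne_operator(G, lam):
    """Tikhonov-regularized minimum norm inverse: W = G^T (G G^T + lam*I)^-1."""
    return G.T @ np.linalg.inv(G @ G.T + lam * np.eye(G.shape[0]))

lam_power = 1.0
lam_coh = lam_power / 100.0  # the paper's finding: ~2 orders smaller for coherence
W_power, W_coh = mne_operator(G, lam_power), mne_operator(G, lam_coh)
meas = rng.standard_normal(n_sensors)
# Source estimates under the two regularization choices differ in scale
# and smoothness, hence the need for task-specific lambdas.
print(np.linalg.norm(W_power @ meas), np.linalg.norm(W_coh @ meas))
```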

  9. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  10. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has nevertheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
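    For reference, the underlying statistical model is ordinary L2-regularized logistic regression; the paper's contribution is the cryptographic protocol wrapped around it, which this sketch does not attempt. In scikit-learn the parameter C is the inverse of the regularization strength.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# The model being protected: L2-regularized logistic regression
# (synthetic data; no privacy machinery here).
rng = np.random.default_rng(7)
X = rng.standard_normal((500, 10))
w_true = rng.standard_normal(10)
y = (X @ w_true + 0.5 * rng.standard_normal(500) > 0).astype(int)

clf = LogisticRegression(penalty="l2", C=1.0)  # C = inverse reg. strength
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```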

  11. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques need to be employed to solve them and to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated in two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
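    The second experiment rests on the Dix equation, which converts RMS velocities at two reflection times into the interval velocity between them; the differencing amplifies noise, which is what motivates a TV-regularized inversion. A textbook sketch with hypothetical picks:

```python
import numpy as np

def dix_interval_velocity(t, v_rms):
    """Textbook Dix conversion of RMS velocities to interval velocities:
    v_int^2 = (t2*v2^2 - t1*v1^2) / (t2 - t1). The differencing amplifies
    noise in the picks, which is why regularized inversion is attractive."""
    t, v_rms = np.asarray(t), np.asarray(v_rms)
    num = np.diff(t * v_rms**2)
    return np.sqrt(num / np.diff(t))

t = [0.5, 1.0, 1.6, 2.4]              # two-way times (s), hypothetical picks
v_rms = [1500., 1800., 2100., 2400.]  # RMS velocities (m/s)
print(dix_interval_velocity(t, v_rms))  # interval velocity per layer
```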

  12. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  13. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has nevertheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  14. Backscattering and Nonparaxiality Arrest Collapse of Damped Nonlinear Waves

    NASA Technical Reports Server (NTRS)

    Fibich, G.; Ilan, B.; Tsynkov, S.

    2002-01-01

    The critical nonlinear Schrodinger equation (NLS) models the propagation of intense laser light in Kerr media. This equation is derived from the more comprehensive nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. It is known that if the input power of the laser beam (i.e., the L2 norm of the initial solution) is sufficiently high, then the NLS model predicts that the beam will self-focus to a point (i.e., collapse) at a finite propagation distance. Mathematically, this behavior corresponds to the formation of a singularity in the solution of the NLS. A key question which has been open for many years is whether the solution to the NLH, i.e., the 'parent' equation, may nonetheless exist and remain regular everywhere, in particular for those initial conditions (input powers) that lead to blowup in the NLS. In the current study, we address this question by introducing linear damping into both models and subsequently comparing the numerical solutions of the damped NLH (a boundary-value problem) with the corresponding solutions of the damped NLS (an initial-value problem). Linear damping is introduced in much the same way as is done when analyzing the classical constant-coefficient Helmholtz equation using the limiting absorption principle. Numerically, we have found that it provides a very efficient tool for controlling the solutions of both the NLH and the NLS. In particular, we have been able to identify initial conditions for which the NLS solution does become singular, whereas the NLH solution still remains regular everywhere. We believe that our finding of a larger domain of existence for the NLH than for the NLS is accounted for by precisely those mechanisms that were neglected when deriving the NLS from the NLH, i.e., nonparaxiality and backscattering.

  15. Particlelike solutions of the Einstein-Dirac equations

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel; Yau, Shing-Tung

    1999-05-01

    The coupled Einstein-Dirac equations for a static, spherically symmetric system of two fermions in a singlet spinor state are derived. Using numerical methods, we construct an infinite number of solitonlike solutions of these equations. The stability of the solutions is analyzed. For weak coupling (i.e., small rest mass of the fermions), all the solutions are linearly stable (with respect to spherically symmetric perturbations), whereas for stronger coupling, both stable and unstable solutions exist. For the physical interpretation, we discuss how the energy of the fermions and the (ADM) mass behave as functions of the rest mass of the fermions. Although gravitation is not renormalizable, our solutions of the Einstein-Dirac equations are regular and well behaved even for strong coupling.

  16. One-loop corrections from higher dimensional tree amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cachazo, Freddy; He, Song; Yuan, Ellis Ye

    We show how one-loop corrections to scattering amplitudes of scalars and gauge bosons can be obtained from tree amplitudes in one higher dimension. Starting with a complete tree-level scattering amplitude of n + 2 particles in five dimensions, one assumes that two of them cannot be "detected" and therefore an integration over their LIPS is carried out. The resulting object, a function of the remaining n particles, is taken to be four-dimensional by restricting the corresponding momenta. We perform this procedure in the context of the tree-level CHY formulation of amplitudes. The scattering equations obtained in the procedure coincide with those derived by Geyer et al. from ambitwistor constructions and recently studied by two of the authors for bi-adjoint scalars. They have two sectors of solutions: regular and singular. We prove that the contribution from regular solutions generically gives rise to unphysical poles. However, using a BCFW argument we prove that the unphysical contributions are always homogeneous functions of the loop momentum and can be discarded. We also show that the contribution from singular solutions turns out to be homogeneous as well.

  17. One-loop corrections from higher dimensional tree amplitudes

    DOE PAGES

    Cachazo, Freddy; He, Song; Yuan, Ellis Ye

    2016-08-01

    We show how one-loop corrections to scattering amplitudes of scalars and gauge bosons can be obtained from tree amplitudes in one higher dimension. Starting with a complete tree-level scattering amplitude of n + 2 particles in five dimensions, one assumes that two of them cannot be "detected" and therefore an integration over their LIPS is carried out. The resulting object, a function of the remaining n particles, is taken to be four-dimensional by restricting the corresponding momenta. We perform this procedure in the context of the tree-level CHY formulation of amplitudes. The scattering equations obtained in the procedure coincide with those derived by Geyer et al. from ambitwistor constructions and recently studied by two of the authors for bi-adjoint scalars. They have two sectors of solutions: regular and singular. We prove that the contribution from regular solutions generically gives rise to unphysical poles. However, using a BCFW argument we prove that the unphysical contributions are always homogeneous functions of the loop momentum and can be discarded. We also show that the contribution from singular solutions turns out to be homogeneous as well.

  18. Endemic infections are always possible on regular networks

    NASA Astrophysics Data System (ADS)

    Del Genio, Charo I.; House, Thomas

    2013-10-01

    We study the dependence of the largest component in regular networks on the clustering coefficient, showing that its size changes smoothly without undergoing a phase transition. We explain this behavior via an analytical approach based on the network structure, and provide an exact equation describing the numerical results. Our work indicates that intrinsic structural properties always allow the spread of epidemics on regular networks.
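
    As a hedged illustration of the quantities involved (not the authors' construction, which tunes the clustering coefficient directly), the sketch below measures the average clustering and the largest-component fraction of a random d-regular graph; the values of n, d, and the seed are arbitrary assumptions:

```python
# Measure clustering and the largest connected component of a random
# d-regular graph. Note: plain random regular graphs have clustering
# near zero for large n; the paper uses a dedicated construction to
# tune the clustering coefficient.
import networkx as nx

n, d = 1000, 4
G = nx.random_regular_graph(d, n, seed=42)

clustering = nx.average_clustering(G)
lcc = max(nx.connected_components(G), key=len)

print(f"average clustering: {clustering:.4f}")
print(f"largest-component fraction: {len(lcc) / n:.4f}")
```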

  19. Contraction of high eccentricity satellite orbits using uniformly regular KS canonical elements with oblate diurnally varying atmosphere.

    NASA Astrophysics Data System (ADS)

    Raj, Xavier James

    2016-07-01

    Accurate orbit prediction for an artificial satellite under the influence of air drag is one of the most difficult and intractable problems in orbital dynamics. The orbital decay of such satellites is controlled mainly by atmospheric drag, whose effects are difficult to determine because the atmospheric density undergoes large fluctuations. The classical Newtonian equations of motion, being nonlinear, are not suitable for long-term integration. Many transformations have emerged in the literature to stabilize the equations of motion, either to reduce the accumulation of local numerical errors or to allow the use of large integration step sizes in the transformed space, or both. One such transformation is the KS transformation of Kustaanheimo and Stiefel, who regularized the nonlinear Kepler equations of motion and reduced them to the linear differential equations of a harmonic oscillator of constant frequency. The method of KS total-energy element equations has proved to be very powerful for obtaining numerical as well as analytical solutions with respect to any type of perturbing force, as the equations are less sensitive to round-off and truncation errors. The uniformly regular KS canonical equations are a particular canonical form of the KS differential equations in which all ten KS canonical elements αi and βi are constant for unperturbed motion. These equations permit a uniform formulation of the basic laws of elliptic, parabolic, and hyperbolic motion. Using them, analytical solutions were developed earlier for short-term orbit prediction with respect to Earth's zonal harmonic terms J2, J3, and J4; the equations were further utilized to include the canonical forces, and analytical drag theories were developed for low-eccentricity orbits (e < 0.2) with different atmospheric models, as well as an analytical theory for high-eccentricity orbits (e > 0.2) obtained by assuming the atmosphere to be oblate only. In this paper a new non-singular analytical theory is developed for the motion of high-eccentricity satellite orbits with an oblate, diurnally varying atmosphere in terms of the uniformly regular KS canonical elements. The analytical solutions are generated up to fourth-order terms using a new independent variable and c, a small parameter that depends on the flattening of the atmosphere. Due to symmetry, only two of the nine equations need to be solved analytically to compute the state vector and the change in energy at the end of each revolution. The theory is developed on the assumption that density is constant on the surfaces of spheroids of fixed ellipticity ɛ (equal to the Earth's ellipticity, 0.00335) whose axes coincide with the Earth's axis. Numerical experimentation with the analytical solution over a wide range of perigee heights, eccentricities, and orbital inclinations has been carried out for up to 100 revolutions, and comparisons with numerically integrated values show that they match quite well. The effectiveness of the present analytical solution is demonstrated by comparing its results with other analytical solutions in the literature.
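
    In outline (standard Stiefel-Scheifele notation assumed, not reproduced from this record): writing the position as x = L(u)u with r = |u|^2 and introducing the fictitious time s via dt = r ds, the unperturbed Kepler equation \ddot{x} = -\mu x / r^3 becomes a linear harmonic oscillator in the KS variables,

    \[
      u'' + \frac{h}{2}\, u = 0,
      \qquad h = \frac{\mu}{r} - \frac{1}{2}\,\lvert \dot{x} \rvert^{2},
    \]

    where primes denote derivatives with respect to s. Since h (the negative of the Kepler energy, positive for elliptic motion) is conserved along unperturbed orbits, the oscillator frequency \sqrt{h/2} is constant, and perturbations such as air drag enter only as slowly varying forcing terms on this linear system.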

  20. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1).
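
    The abstract of this record is truncated. As a rough, hypothetical sketch of the general idea behind iterative thresholding for Lp (0 < p < 1) regularization (explicitly not the paper's SAITA algorithm or its multiple sub-dictionary representation), the following runs a plain ISTA loop with a Chartrand-style p-shrinkage operator; the operator form, step size, and parameters are illustrative assumptions:

```python
# Hypothetical sketch: ISTA with p-shrinkage for Lp-regularized sparse
# recovery, approximately min_x 0.5*||Ax - y||^2 + lam*||x||_p^p.
# NOT the SAITA algorithm of the paper.
import numpy as np

def p_shrink(z, lam, p):
    """Elementwise p-shrinkage; reduces to soft thresholding at p = 1."""
    mag = np.abs(z)
    # floor avoids 0**(p - 1); zero entries stay zero after shrinkage
    thresh = lam ** (2 - p) * np.maximum(mag, 1e-12) ** (p - 1)
    return np.sign(z) * np.maximum(mag - thresh, 0.0)

def lp_ista(A, y, lam=0.05, p=0.5, iters=300):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = p_shrink(x - step * (A.T @ (A @ x - y)), lam * step, p)
    return x

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.0, -0.8, 0.6]
x_hat = lp_ista(A, A @ x_true)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```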
