Sample records for regular solution model

  1. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
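
    A minimal numerical sketch of the singular-value-plot idea described above, assuming a generic linearized problem Gm = d (the function name, the tolerance defining "approaches zero", and the test matrix are illustrative, not from the paper): pick the first singular value that drops toward zero as the regularization parameter, then form the damped least-squares solution and the model resolution matrix.

```python
import numpy as np

def tradeoff_regularization(G, d, tol=1e-3):
    """Pick the regularization parameter from the singular value plot:
    the first singular value that 'approaches zero' (here, falls below
    tol times the largest one), then form the damped least-squares
    solution and the model resolution matrix."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    below = np.where(s < tol * s[0])[0]
    lam = s[below[0]] if below.size else s[-1]
    filt = s / (s**2 + lam**2)                        # damped filter factors
    m = Vt.T @ (filt * (U.T @ d))                     # regularized model
    R = Vt.T @ np.diag(s**2 / (s**2 + lam**2)) @ Vt   # model resolution matrix
    return m, lam, R
```

    The resolution matrix R approaches the identity as lam shrinks, so the chosen singular value directly sets the resolution level at which error bars are later computed.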

  2. Application of regular associated solution model to the liquidus curves of the Sn-Te and Sn-SnS systems

    NASA Astrophysics Data System (ADS)

    Eric, H.

    1982-12-01

    The liquidus curves of the Sn-Te and Sn-SnS systems were evaluated by the regular associated solution model (RAS). The main assumption of this theory is the existence of species A, B and associated complexes AB in the liquid phase. Thermodynamic properties of the binary A-B system are derived by ternary regular solution equations. Calculations based on this model for the Sn-Te and Sn-SnS systems are in agreement with published data.
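
    As a hedged illustration of the general approach (a single-site regular-solution liquidus, not the associated-solution treatment with AB complexes; all numbers below are invented), a liquidus temperature can be found by equating the chemical potential of the pure solid with that of the component in a regular-solution liquid and solving for T by bisection:

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def liquidus_T(x, W, dH_fus, T_m, T_lo=300.0):
    """Liquidus temperature at mole fraction x of the freezing component,
    for a regular-solution liquid (interaction parameter W, J/mol) in
    equilibrium with the pure solid (heat of fusion dH_fus, J/mol;
    melting point T_m, K).  Bisection on
        ln x + W (1-x)^2 / (R T) = -(dH_fus / R) (1/T - 1/T_m)."""
    def g(T):
        return (np.log(x) + W * (1.0 - x) ** 2 / (R_GAS * T)
                + dH_fus / R_GAS * (1.0 / T - 1.0 / T_m))
    a, b = T_lo, T_m
    for _ in range(100):
        mid = 0.5 * (a + b)
        if g(a) * g(mid) <= 0.0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)
```

    For W = 0 this reduces to the ideal (Schröder-van Laar) liquidus; a negative W, as expected for associating liquids, depresses the liquidus further.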

  3. FAST TRACK COMMUNICATION: Regularized Kerr-Newman solution as a gravitating soliton

    NASA Astrophysics Data System (ADS)

    Burinskii, Alexander

    2010-10-01

    The charged, spinning and gravitating soliton is realized as a regular solution of the Kerr-Newman (KN) field coupled with a chiral Higgs model. A regular core of the solution is formed by a domain wall bubble interpolating between the external KN solution and a flat superconducting interior. An internal electromagnetic (em) field is expelled to the boundary of the bubble by the Higgs field. The solution reveals two new peculiarities: (i) the Higgs field is oscillating, similar to the known oscillon models; (ii) the em field forms on the edge of the bubble a Wilson loop, resulting in quantization of the total angular momentum.

  4. Thermodynamic Modeling of the YO1.5-ZrO2 System

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Liu, Zi-Kui; Kaufman, Larry; Zhang, Fan

    2003-01-01

    The YO1.5-ZrO2 system consists of five solid solutions, one liquid solution, and one intermediate compound. A thermodynamic description of this system is developed, which allows calculation of the phase diagram and thermodynamic properties. Two different solution models are used: a neutral species model with YO1.5 and ZrO2 as the components, and a charged species model with Y(+3), Zr(+4), O(-2), and vacancies as components. For each model, regular and sub-regular solution parameters are derived from selected equilibrium phase and thermodynamic data.

  5. Primordial cosmology in mimetic born-infeld gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouhmadi-Lopez, Mariam; Chen, Che-Yu; Chen, Pisin

    Here, the Eddington-inspired-Born-Infeld (EiBI) model is reformulated within the mimetic approach. In the presence of a mimetic field, the model contains non-trivial vacuum solutions which could be free of spacetime singularity because of the Born-Infeld nature of the theory. We study a realistic primordial vacuum universe and prove the existence of regular solutions, such as primordial inflationary solutions of de Sitter type or bouncing solutions. Besides, the linear instabilities present in the EiBI model are found to be avoidable for some interesting bouncing solutions in which the physical metric as well as the auxiliary metric are regular at the background level.

  6. Primordial cosmology in mimetic born-infeld gravity

    DOE PAGES

    Bouhmadi-Lopez, Mariam; Chen, Che-Yu; Chen, Pisin

    2017-11-29

    Here, the Eddington-inspired-Born-Infeld (EiBI) model is reformulated within the mimetic approach. In the presence of a mimetic field, the model contains non-trivial vacuum solutions which could be free of spacetime singularity because of the Born-Infeld nature of the theory. We study a realistic primordial vacuum universe and prove the existence of regular solutions, such as primordial inflationary solutions of de Sitter type or bouncing solutions. Besides, the linear instabilities present in the EiBI model are found to be avoidable for some interesting bouncing solutions in which the physical metric as well as the auxiliary metric are regular at the background level.

  7. Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam

    2018-04-01

    Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models, e.g., by k-SVD sparse dictionary learning or traditional principal component analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating-directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least-squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
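
    The discreteness penalty can be sketched in isolation (a toy gradient-descent version with an invented two-facies penalty, not the paper's k-SVD/PCA parameterization or its alternating-directions solver):

```python
import numpy as np

def calibrate_two_facies(G, d, lam=0.1, lr=0.05, n_iter=2000):
    """Minimize 0.5 ||G m - d||^2 + lam * sum_i m_i^2 (1 - m_i)^2 by
    gradient descent.  The penalty term vanishes only when every cell
    takes one of the two facies values {0, 1}, so it pushes the
    calibrated model toward a discrete facies map."""
    m = np.full(G.shape[1], 0.5)
    for _ in range(n_iter):
        grad_data = G.T @ (G @ m - d)
        grad_pen = 2.0 * m * (1.0 - m) * (1.0 - 2.0 * m)  # d/dm of m^2 (1-m)^2
        m = m - lr * (grad_data + lam * grad_pen)
    return m
```

    With more than two facies the penalty generalizes to a product of squared distances to each facies value, at the cost of more local minima.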

  8. High-resolution CSR GRACE RL05 mascons

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2016-10-01

    The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals toward past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.

  9. Exact Solution of the Gyration Radius of an Individual's Trajectory for a Simplified Human Regular Mobility Model

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong

    2011-12-01

    We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to the workplace, going to do leisure activities and returning home. Under the assumptions that the individual has a constant travel speed and a lower limit on the time spent at home and at work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution for the gyration radius. The analytical solution captures the empirical observation well.
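
    The gyration radius itself has a standard definition that closed-form results like the one above can be checked against numerically (the trajectory coordinates below are made up):

```python
import numpy as np

def gyration_radius(points):
    """Radius of gyration of a trajectory: the root-mean-square distance
    of the visited positions from their center of mass."""
    r = np.asarray(points, dtype=float)
    cm = r.mean(axis=0)
    return float(np.sqrt(((r - cm) ** 2).sum(axis=1).mean()))
```

    For a home-work-leisure day, `points` would hold the sequence of positions visited along the daily loop.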

  10. Regularity of Solutions of the Nonlinear Sigma Model with Gravitino

    NASA Astrophysics Data System (ADS)

    Jost, Jürgen; Keßler, Enno; Tolksdorf, Jürgen; Wu, Ruijun; Zhu, Miaomiao

    2018-02-01

    We propose a geometric setup to study analytic aspects of a variant of the supersymmetric two-dimensional nonlinear sigma model. This functional extends the functional of Dirac-harmonic maps by gravitino fields. The system of Euler-Lagrange equations of the two-dimensional nonlinear sigma model with gravitino is calculated explicitly. The gravitino terms pose additional analytic difficulties in showing smoothness of weak solutions, which are overcome using Rivière's regularity theory and Riesz potential theory.

  11. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto one about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors than the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of their degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution.
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE Release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of filtered hydrological model output is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
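
    For a small problem the L-curve corner can be located directly, without the Lanczos projection needed for GRACE-scale systems (a sketch; the parameter grid, the discrete curvature estimate, and the test matrix are illustrative):

```python
import numpy as np

def lcurve_corner(G, d, lams):
    """Tikhonov solutions over a grid of regularization parameters; the
    L-curve corner is taken as the point of maximum curvature of the
    (log residual norm, log solution norm) curve."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    beta = U.T @ d
    rho, eta = [], []
    for lam in lams:
        f = s ** 2 / (s ** 2 + lam ** 2)   # Tikhonov filter factors
        m = Vt.T @ (f * beta / s)
        rho.append(np.linalg.norm(G @ m - d))
        eta.append(np.linalg.norm(m))
    x, y = np.log(rho), np.log(eta)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return lams[int(np.argmax(kappa))]
```

    The discrete curvature estimate is noisy on flat stretches of the curve, which is one reason production pipelines use more careful corner-finding than this sketch.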

  12. Contribution of the GOCE gradiometer components to regional gravity solutions

    NASA Astrophysics Data System (ADS)

    Naeimi, Majid; Bouman, Johannes

    2017-05-01

    The contribution of the GOCE gravity gradients to regional gravity field solutions is investigated in this study. We employ radial basis functions to recover the gravity field on regional scales over the Amazon and the Himalayas as our test regions. In the first step, four individual solutions based on the more accurate gravity gradient components Txx, Tyy, Tzz and Txz are derived. The Tzz component yields a better solution than the other single-component solutions, despite Tzz being less accurate than Txx and Tyy. Furthermore, we determine five more solutions based on selected combinations of the gravity gradient components, including a combined solution using all four components. The Tzz and Tyy components are shown to be the main contributors in all combined solutions, whereas Txz adds the least value to the regional gravity solutions. We also investigate the contribution of the regularization term and show that it decreases significantly as more gravity gradients are included. For the solution using all gravity gradients, the regularization term contributes about 5 per cent of the total solution. Finally, we demonstrate that in our test areas, regional gravity modelling based on GOCE data provides a more reliable gravity signal at medium wavelengths than pre-GOCE global gravity field models such as EGM2008.

  13. Dynamics from a mathematical model of a two-state gas laser

    NASA Astrophysics Data System (ADS)

    Kleanthous, Antigoni; Hua, Tianshu; Manai, Alexandre; Yawar, Kamran; Van Gorder, Robert A.

    2018-05-01

    Motivated by recent work in the area, we consider the behavior of solutions to a nonlinear PDE model of a two-state gas laser. We first review the derivation of the two-state gas laser model, before deriving a non-dimensional model given in terms of coupled nonlinear partial differential equations. We then classify the steady states of this system, in order to determine the possible long-time asymptotic solutions to this model, as well as corresponding stability results, showing that the only uniform steady state (the zero motion state) is unstable, while a linear profile in space is stable. We then provide numerical simulations for the full unsteady model. We show for a wide variety of initial conditions that the solutions tend toward the stable linear steady state profiles. We also consider traveling wave solutions, and determine the unique wave speed (in terms of the other model parameters) which allows wave-like solutions to exist. Despite some similarities between the model and the inviscid Burgers' equation, the solutions we obtain are much more regular than the solutions to the inviscid Burgers' equation, with no evidence of shock formation or loss of regularity.

  14. Ca-Rich Carbonate Melts: A Regular-Solution Model, with Applications to Carbonatite Magma + Vapor Equilibria and Carbonate Lavas on Venus

    NASA Technical Reports Server (NTRS)

    Treiman, Allan H.

    1995-01-01

    A thermochemical model of the activities of species in carbonate-rich melts would be useful in quantifying chemical equilibria between carbonatite magmas and vapors and in extrapolating liquidus equilibria to unexplored pressure-temperature-composition (PTX) conditions. A regular-solution model of Ca-rich carbonate melts is developed here, using the fact that they are ionic liquids, and can be treated (to a first approximation) as interpenetrating regular solutions of cations and of anions. Thermochemical data on systems of alkali metal cations with carbonate and other anions are drawn from the literature; data on systems with alkaline earth (and other) cations and carbonate (and other) anions are derived here from liquidus phase equilibria. The model is validated in that all available data (at 1 kbar) are consistent with single values for the melting temperature and heat of fusion for calcite, and all liquidi are consistent with the liquids acting as regular solutions. At 1 kbar, the metastable congruent melting temperature of calcite (CaCO3) is inferred to be 1596 K, with ΔH_fus(calcite) = 31.5 +/- 1 kJ/mol. Regular-solution interaction parameters (W) for Ca(2+) and alkali metal cations are in the range -3 to -12 kJ/mol; W for Ca(2+)-Ba(2+) is approximately -11 kJ/mol; W for Ca(2+)-Mg(2+) is approximately -40 kJ/mol, and W for Ca(2+)-La(3+) is approximately +85 kJ/mol. Solutions of carbonate and most anions (including OH(-), F(-), and SO4(2-)) are nearly ideal, with W between 0 (ideal) and -2.5 kJ/mol. The interaction of carbonate and phosphate ions is strongly nonideal, which is consistent with the suggestion of carbonate-phosphate liquid immiscibility. Interaction of carbonate and sulfide ions is also nonideal and suggestive of carbonate-sulfide liquid immiscibility. Solution of H2O, for all but the most H2O-rich compositions, can be modeled as a disproportionation to hydronium (H3O(+)) and hydroxyl (OH(-)) ions, with W for Ca(2+)-H3O(+) of approximately 33 kJ/mol.
The regular-solution model of carbonate melts can be applied to problems of carbonatite magma + vapor equilibria and of extrapolating liquidus equilibria to unstudied systems. Calculations on one carbonatite (the Husereau dike, Oka complex, Quebec, Canada) show that the anion solution of its magma contained an OH mole fraction of approximately 0.07, although the vapor in equilibrium with the magma had P(H2O) = 8.5 x P(CO2). F in carbonatite systems is calculated to be strongly partitioned into the magma (as F(-)) relative to coexisting vapor. In the Husereau carbonatite magma, the anion solution contained an F(-) mole fraction of approximately 6 x 10^-5.

  15. Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.

    PubMed

    Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob

    2011-03-01

    We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent, and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find that it is considerably faster than competing methods.
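
    The core update is a soft-thresholding step cycled over coordinates, with warm starts down a decreasing grid of regularization parameters. A sketch using squared-error loss in place of the Cox partial likelihood (the function names, grid choices, and data are invented for illustration):

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def elastic_net_path(X, y, alpha=0.5, n_lams=20, n_iter=200):
    """Cyclical coordinate descent for
        (1/2n) ||y - X b||^2 + lam * (alpha ||b||_1 + (1-alpha)/2 ||b||_2^2),
    with warm starts along a decreasing grid of lam values."""
    n, p = X.shape
    X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize predictors
    y = y - y.mean()
    lam_max = np.abs(X.T @ y).max() / (n * alpha)   # smallest lam giving b = 0
    lams = np.logspace(np.log10(lam_max), np.log10(lam_max * 1e-3), n_lams)
    b = np.zeros(p)
    path = []
    for lam in lams:
        for _ in range(n_iter):
            for j in range(p):
                r_j = y - X @ b + X[:, j] * b[j]    # partial residual
                z = X[:, j] @ r_j / n
                b[j] = soft_threshold(z, lam * alpha) / (1.0 + lam * (1.0 - alpha))
        path.append(b.copy())
    return lams, np.array(path)
```

    The warm start matters: each lam begins from the previous solution, so only a few coordinate sweeps are needed per grid point.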

  16. A simple homogeneous model for regular and irregular metallic wire media samples

    NASA Astrophysics Data System (ADS)

    Kosulnikov, S. Y.; Mirmoosa, M. S.; Simovski, C. R.

    2018-02-01

    To simplify the solution of electromagnetic problems with wire media samples, it is reasonable to treat them as samples of a homogeneous material without spatial dispersion. Accounting for spatial dispersion implies additional boundary conditions and makes the solution of boundary problems difficult, especially if the sample is not an infinitely extended layer. Moreover, for a novel type of wire media - arrays of randomly tilted wires - a spatially dispersive model has not been developed. Here, we introduce a simple heuristic model of wire media samples shaped as bricks. Our model covers wire media of both regularly and irregularly stretched wires.

  17. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as the "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or none of the systematic observation residuals that frequently result from signal suppression under regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and the Nile, are clearly evident, in contrast to noisy estimates from RL04.
The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.

  18. Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.

    PubMed

    Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S

    2015-07-27

    In this paper we propose a fast but accurate algorithm for numerical modeling of light fields in a turbid medium slab. Numerical solution of the radiative transfer equation (RTE) requires its discretization, based on eliminating the anisotropic part of the solution and replacing the scattering integral by a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part gives the algorithm high convergence in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. The significant increase in solution accuracy provided by synthetic iterations allows the two-stream approximation to be used for determining the regular part. This approach permits generalizing the proposed method to an arbitrary 3D geometry of the medium.

  19. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.

  20. Chaotic and regular instantons in helical shell models of turbulence

    NASA Astrophysics Data System (ADS)

    De Pietro, Massimo; Mailybaev, Alexei A.; Biferale, Luca

    2017-03-01

    Shell models of turbulence have a finite-time blowup in the inviscid limit, i.e., the enstrophy diverges while the single-shell velocities stay finite. The signature of this blowup is represented by self-similar instantonic structures traveling coherently through the inertial range. These solutions might influence the energy transfer and the anomalous scaling properties empirically observed for the forced and viscous models. In this paper we present a study of the instantonic solutions for a set of four shell models of turbulence based on the exact decomposition of the Navier-Stokes equations in helical eigenstates. We find that depending on the helical structure of each model, instantons are chaotic or regular. Some instantonic solutions tend to recover mirror symmetry for scales small enough. Models that have anomalous scaling develop regular nonchaotic instantons. Conversely, models that have nonanomalous scaling in the stationary regime are those that have chaotic instantons. The direction of the energy carried by each single instanton tends to coincide with the direction of the energy cascade in the stationary regime. Finally, we find that whenever the small-scale stationary statistics is intermittent, the instanton is less steep than the dimensional Kolmogorov scaling, independently of whether or not it is chaotic. Our findings further support the idea that instantons might be crucial to describe some aspects of the multiscale anomalous statistics of shell models.

  1. Chemical interactions and thermodynamic studies in aluminum alloy/molten salt systems

    NASA Astrophysics Data System (ADS)

    Narayanan, Ramesh

    The recycling of aluminum and aluminum alloys such as Used Beverage Container (UBC) is done under a cover of molten salt flux based on (NaCl-KCl+fluorides). The reactions of aluminum alloys with molten salt fluxes have been investigated. Thermodynamic calculations are performed in the alloy/salt flux systems which allow quantitative predictions of the equilibrium compositions. There is preferential reaction of Mg in Al-Mg alloy with molten salt fluxes, especially those containing fluorides like NaF. An exchange reaction between Al-Mg alloy and molten salt flux has been demonstrated. Mg from the Al-Mg alloy transfers into the salt flux while Na from the salt flux transfers into the metal. Thermodynamic calculations indicated that the amount of Na in metal increases as the Mg content in alloy and/or NaF content in the reacting flux increases. This is an important point because small amounts of Na have a detrimental effect on the mechanical properties of the Al-Mg alloy. The reactions of Al alloys with molten salt fluxes result in the formation of bluish-purple "streamers". It was established that the streamer is liquid alkali metal (Na and K in the case of NaCl-KCl-NaF systems) dissipating into the melt. The melts in which such streamers were observed are identified. The metal losses occurring due to reactions have been quantified, both by thermodynamic calculations and experimentally. A computer program has been developed to calculate ternary phase diagrams in molten salt systems from the constituting binary phase diagrams, based on a regular solution model. The extent of deviation of the binary systems from regular solution has been quantified. The systems investigated in which good agreement was found between the calculated and experimental phase diagrams included NaF-KF-LiF, NaCl-NaF-NaI and KNO3-TlNO3-LiNO3.
Furthermore, insight has been provided into the interrelationship between the regular solution parameters and the topology of the phase diagram. The isotherms are flat (i.e., not skewed) when the regular solution parameters are zero. When the regular solution parameters are non-zero, the isotherms are skewed. A regular solution model is not adequate to accurately model the molten salt systems used in recycling, such as NaCl-KCl-LiF and NaCl-KCl-NaF.
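
    For the symmetric regular-solution combination the thesis describes, the ternary excess Gibbs energy and activity coefficients follow directly from the three binary interaction parameters (the standard regular-solution expressions; the numerical values in the example are placeholders, not the fitted salt parameters):

```python
R_GAS = 8.314  # J/(mol K)

def excess_gibbs(x, W12, W13, W23):
    """Molar excess Gibbs energy (J/mol) of a ternary regular solution
    built from the binary interaction parameters:
        G_ex = W12 x1 x2 + W13 x1 x3 + W23 x2 x3."""
    x1, x2, x3 = x
    return W12 * x1 * x2 + W13 * x1 * x3 + W23 * x2 * x3

def ln_gamma1(x, W12, W13, W23, T):
    """Activity coefficient of component 1 in the same model:
        RT ln gamma_1 = W12 x2^2 + W13 x3^2 + (W12 + W13 - W23) x2 x3."""
    _, x2, x3 = x
    return (W12 * x2 ** 2 + W13 * x3 ** 2
            + (W12 + W13 - W23) * x2 * x3) / (R_GAS * T)
```

    Zero interaction parameters recover the ideal solution, and in the binary limit (x3 = 0) the familiar RT ln gamma_1 = W12 x2^2 is recovered; nonzero W values are what skew the calculated isotherms, as noted above.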

  2. Complex optimization for big computational and experimental neutron datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  3. Complex optimization for big computational and experimental neutron datasets

    DOE PAGES

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...

    2016-11-07

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  4. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

    Three dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to that of the original 3D rotating Navier-Stokes equations (alpha = 0), for f large but fixed, as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, and the estimates are uniform in alpha.

  5. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  6. Nonminimal Wu-Yang wormhole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balakin, A. B.; Zayats, A. E.; Sushkov, S. V.

    2007-04-15

    We discuss exact solutions of a three-parameter nonminimal Einstein-Yang-Mills model, which describe wormholes of a new type. These wormholes are supported by an SU(2)-symmetric Yang-Mills field nonminimally coupled to gravity, with the Wu-Yang ansatz used for the gauge field. We distinguish between regular solutions, describing traversable nonminimal Wu-Yang wormholes, and black wormholes possessing one or two event horizons. The relation between the asymptotic mass of the regular traversable Wu-Yang wormhole and its throat radius is analyzed.

  7. Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Dunca, Argus A.

    2017-12-01

    This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and the zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0, T; H^1). It is shown that under this regularity condition the error u − w^α is O(α) in the norms L^2(0, T; H^1) and L^∞(0, T; L^2), thus improving related known results. It is also shown that the averaged error ū − w̄^α (between the filtered velocities) is of higher order, O(α^{1.5}), in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than they approximate the exact (unfiltered) flow velocities.

  8. Bardeen regular black hole with an electric source

    NASA Astrophysics Data System (ADS)

    Rodrigues, Manuel E.; Silva, Marcos V. de S.

    2018-06-01

    If some energy conditions on the stress-energy tensor are violated, it is possible to construct regular black holes in General Relativity and in alternative theories of gravity. This type of solution has horizons but no singularities. The first regular black hole was presented by Bardeen and can be obtained from the Einstein equations in the presence of an electromagnetic field. E. Ayon-Beato and A. Garcia reinterpreted the Bardeen metric as a magnetic solution of General Relativity coupled to a nonlinear electrodynamics. In this work, we show that the Bardeen model may also be interpreted as a solution of the Einstein equations in the presence of an electric source, whose electric field does not behave as a Coulomb field. We analyze the asymptotic forms of the Lagrangian for the electric case and also analyze the energy conditions.

  9. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  10. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  11. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
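
    The key computational trick described above, solving for linearly entering parameters analytically while searching or sampling only the non-linear ones, can be sketched on a toy exponential-decay model. The model, parameter values, and grid search below are illustrative assumptions, not the authors' implementation:

```python
import math

# Toy illustration of profiling out a linear parameter: in y = a*exp(-b*t)
# the amplitude a is linear, so for each candidate non-linear parameter b
# it is solved in closed form by least squares; only b is searched.
ts = [0.1 * i for i in range(20)]
b_true, a_true = 1.5, 2.0
ys = [a_true * math.exp(-b_true * t) for t in ts]

def profiled_misfit(b):
    f = [math.exp(-b * t) for t in ts]
    # Closed-form least-squares amplitude for this candidate b.
    a = sum(yi * fi for yi, fi in zip(ys, f)) / sum(fi * fi for fi in f)
    return sum((yi - a * fi) ** 2 for yi, fi in zip(ys, f)), a

# Grid search over the non-linear parameter only.
best = min(profiled_misfit(0.01 * k) + (0.01 * k,) for k in range(1, 300))
misfit, a_hat, b_hat = best
print(b_hat, a_hat)  # recovers b ≈ 1.5, a ≈ 2.0
```

    In the full Bayesian setting the grid search would be replaced by Monte Carlo sampling over the non-linear parameters, with the linear block still handled analytically.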

  12. Regularization of moving boundaries in a laplacian field by a mixed Dirichlet-Neumann boundary condition: exact results.

    PubMed

    Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar

    2005-11-04

    The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.

  13. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model-order and model-complexity reductions were investigated. Singular perturbation methods were first attempted and found to be unsatisfactory. A second approach based on regular perturbation analysis was subsequently investigated. It also fails because the aerodynamic effects (ignored in the zero-order solution) are too large to be treated as perturbations. The study therefore demonstrates that perturbation methods alone (both regular and singular) are inadequate for developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation with the analytical method of regular perturbations, and introduces the concept of choosing intelligent interpolating functions. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.

  14. 4D-tomographic reconstruction of water vapor using the hybrid regularization technique with application to the North West of Iran

    NASA Astrophysics Data System (ADS)

    Adavi, Zohre; Mashhadi-Hossainali, Masoud

    2015-04-01

    Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, driven by atmospheric phenomena above the surface of the earth, depends on both space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter: by analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals, inversion techniques are used to model the water vapor. Non-uniqueness and instability of the solution are the two characteristic features of this problem, and horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used to compute a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques; it benefits from the advantages of both iterative and direct techniques and is independent of initial values. Based on this property, and using an appropriate resolution for the model, the number of model elements that are not constrained by GPS measurements is first minimized, and water vapor density is then estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
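
    As a side note on the damped least-squares machinery underlying LSQR-plus-Tikhonov schemes of this kind: the damped problem min ‖Ax − b‖² + λ²‖x‖² is equivalent to an ordinary least-squares problem with an augmented matrix, which is what LSQR solves internally when given a damping parameter. The toy data below is a hypothetical illustration, not the tomography system of the paper:

```python
import numpy as np

# Damped least squares min ||A x - b||^2 + lam^2 ||x||^2 rewritten as an
# ordinary least-squares problem with the augmented matrix [A; lam*I].
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(30)

lam = 0.1
A_aug = np.vstack([A, lam * np.eye(10)])        # stack damping rows under A
b_aug = np.concatenate([b, np.zeros(10)])
x_reg, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print(np.linalg.norm(x_reg - x_true))
```

    For large sparse tomography systems one would use an iterative solver such as LSQR on the same augmented problem rather than a dense factorization.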

  15. Feasibility of inverse problem solution for determination of city emission function from night sky radiance measurements

    NASA Astrophysics Data System (ADS)

    Petržala, Jaromír

    2018-07-01

    The knowledge of the emission function of a city is crucial for simulating sky glow in its vicinity. Indirect methods for obtaining this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving the given inverse problem, namely testing the fitness of various stabilizing functionals within Tikhonov's regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of the optimal regularization parameter. First, we created a theoretical model for calculating the sky spectral radiance as a functional of the emission spectral radiance. Consequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and perturbed by random errors. The results demonstrate that the second-order Tikhonov regularization method, together with choosing the regularization parameter by the L-curve maximum-curvature criterion, provides solutions in good agreement with the assumed model emission functions.
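
    The L-curve corner criterion mentioned here can be sketched as follows. This is a generic toy (Gaussian-kernel forward operator, second-difference smoothing operator, maximum-curvature corner estimated by finite differences), not the paper's emission-function model:

```python
import numpy as np

# Second-order Tikhonov regularization with an L-curve corner search:
# the corner is the point of maximum curvature of the parametric curve
# (log residual norm, log smoothness seminorm) over a grid of lambdas.
rng = np.random.default_rng(1)
n = 50
A = np.array([[np.exp(-0.1 * (i - j) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0.0, np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)

L = np.diff(np.eye(n), 2, axis=0)            # second-difference operator
lams = np.logspace(-6, 1, 30)
sols, rhos, etas = [], [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)
    sols.append(x)
    rhos.append(np.linalg.norm(A @ x - b))   # residual norm
    etas.append(np.linalg.norm(L @ x))       # smoothness seminorm

lr, le = np.log(rhos), np.log(etas)
d1r, d1e = np.gradient(lr), np.gradient(le)
d2r, d2e = np.gradient(d1r), np.gradient(d1e)
kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5  # curvature
i = 1 + int(np.argmax(np.abs(kappa[1:-1])))  # corner, endpoints excluded
lam_star, x_star = lams[i], sols[i]
print(lam_star)
```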

  16. Asymptotic traveling wave solution for a credit rating migration problem

    NASA Astrophysics Data System (ADS)

    Liang, Jin; Wu, Yuan; Hu, Bei

    2016-07-01

    In this paper, an asymptotic traveling wave solution of a free boundary model for pricing a corporate bond with credit rating migration risk is studied. This is the first study to associate an asymptotic traveling wave solution with the credit rating migration problem. The pricing problem with credit rating migration risk is modeled by a free boundary problem, and the existence, uniqueness and regularity of the solution are obtained. Under certain conditions, we prove that the solution of our credit rating problem converges to a traveling wave solution, which has an explicit form. Furthermore, numerical examples are presented.

  17. Global Regularity for Several Incompressible Fluid Models with Partial Dissipation

    NASA Astrophysics Data System (ADS)

    Wu, Jiahong; Xu, Xiaojing; Ye, Zhuan

    2017-09-01

    This paper examines the global regularity problem for several 2D incompressible fluid models with partial dissipation: the surface quasi-geostrophic (SQG) equation, the 2D Euler equation and the 2D Boussinesq equations. These are well-known models in fluid mechanics and geophysics, and the fundamental issue of whether or not they are globally well-posed has attracted enormous attention. The corresponding models with partial dissipation may arise in physical circumstances when the dissipation varies in different directions. We show that the SQG equation with either horizontal or vertical dissipation always has global solutions. This is in sharp contrast with the inviscid SQG equation, for which the global regularity problem remains open. Although the 2D Euler equation is globally well-posed for sufficiently smooth data, the associated equations with partial dissipation no longer conserve the vorticity and their global regularity is not trivial. We are able to prove the global regularity for two partially dissipated Euler equations. Several global bounds are also obtained for a partially dissipated Boussinesq system.

  18. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389

  19. Scalar field cosmologies with inverted potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boisseau, B.; Giacomini, H.; Polarski, D., E-mail: bruno.boisseau@lmpt.univ-tours.fr, E-mail: hector.giacomini@lmpt.univ-tours.fr, E-mail: david.polarski@umontpellier.fr

    Regular bouncing solutions in the framework of a scalar-tensor gravity model were found in a recent work. We reconsider the problem in the Einstein frame (EF) in the present work. Singularities arising at the limit of physical viability of the model in the Jordan frame (JF) are either of the Big Bang or of the Big Crunch type in the EF. As a result we obtain integrable scalar field cosmological models in general relativity (GR) with inverted double-well potentials unbounded from below which possess solutions regular in the future, tending to a de Sitter space, and starting with a Big Bang. The existence of the two fixed points for the field dynamics at late times found earlier in the JF becomes transparent in the EF.

  20. Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions

    NASA Astrophysics Data System (ADS)

    Leiderman, Karin; Olson, Sarah D.

    2016-02-01

    The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.

  1. Exact solutions of unsteady Korteweg-de Vries and time regularized long wave equations.

    PubMed

    Islam, S M Rayhanul; Khan, Kamruzzaman; Akbar, M Ali

    2015-01-01

    In this paper, we implement the exp(-Φ(ξ))-expansion method to construct exact traveling wave solutions for nonlinear evolution equations (NLEEs). Here we consider two model equations, namely the Korteweg-de Vries (KdV) equation and the time regularized long wave (TRLW) equation. These equations play a significant role in nonlinear science. We obtain four types of explicit function solutions, namely hyperbolic, trigonometric, exponential and rational function solutions of the variables in the considered equations. It is shown that the applied method is quite efficient and practically well suited for these problems, as well as for other NLEEs that arise in mathematical physics and engineering. PACS numbers: 02.30.Jr, 02.70.Wz, 05.45.Yv, 94.05.Fq.
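
    As a quick numerical cross-check of the kind of traveling wave solutions discussed here, one can verify by finite differences that the classical KdV one-soliton satisfies the equation. The normalization u_t + 6u u_x + u_xxx = 0 used below is the standard textbook form and may differ from the paper's:

```python
import math

# The one-soliton u(x,t) = (c/2) sech^2( (sqrt(c)/2)(x - c*t) ) solves
# u_t + 6 u u_x + u_xxx = 0; check with central finite differences.
def u(x, t, c=1.0):
    s = 1.0 / math.cosh(0.5 * math.sqrt(c) * (x - c * t))
    return 0.5 * c * s * s

h = 1e-3
x0, t0 = 0.3, 0.2
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
# Central five-point stencil for the third derivative, O(h^2) accurate.
u_xxx = (u(x0 + 2 * h, t0) - 2 * u(x0 + h, t0)
         + 2 * u(x0 - h, t0) - u(x0 - 2 * h, t0)) / (2 * h**3)
residual = u_t + 6 * u(x0, t0) * u_x + u_xxx
print(abs(residual))  # close to zero, up to O(h^2) truncation error
```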

  2. Control for well-posedness about a class of non-Newtonian incompressible porous medium fluid equations

    NASA Astrophysics Data System (ADS)

    Deng, Shuxian; Ge, Xinxin

    2017-10-01

    Considering the non-Newtonian fluid equation for incompressible porous media, we use the properties of operator semigroups and measure spaces, the squeeze principle, Fourier analysis and a priori estimates in the measure space to discuss the well-posedness of the solution of the equation, its asymptotic behavior and its topological properties. Through the diffusion regularization method and the compactness method, we study the overall decay rate of the solution in a certain space for sufficiently regular initial values. The decay estimate of the solution of the incompressible seepage equation is obtained, and the asymptotic behavior of the solution is derived using the double regularization model and the Duhamel principle.

  3. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bildhauer, Michael, E-mail: bibi@math.uni-sb.de; Fuchs, Martin, E-mail: fuchs@math.uni-sb.de

    2012-12-15

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.

  4. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, a Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators, with reproducing kernel structures adapted to the metrics of the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  5. A hybrid inventory management system responding to regular demand and surge demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammad S. Roni; Mingzhou Jin; Sandra D. Eksioglu

    2014-06-01

    This paper proposes a hybrid policy for a stochastic inventory system facing regular demand and surge demand. The combination of two different demand patterns can be observed in many areas, such as healthcare inventory and humanitarian supply chain management. The surge demand has a lower arrival rate but higher demand volume per arrival. The solution approach proposed in this paper incorporates the level crossing method and mixed integer programming technique to optimize the hybrid inventory policy with both regular orders and emergency orders. The level crossing method is applied to obtain the equilibrium distributions of inventory levels under a given policy. The model is further transformed into a mixed integer program to identify an optimal hybrid policy. A sensitivity analysis is conducted to investigate the impact of parameters on the optimal inventory policy and minimum cost. Numerical results clearly show the benefit of using the proposed hybrid inventory model. The model and solution approach could help healthcare providers or humanitarian logistics providers in managing their emergency supplies in responding to surge demands.

  6. Efficient field-theoretic simulation of polymer solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106

    2014-12-14

    We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field-theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.

  7. Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise

    PubMed Central

    Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang

    2015-01-01

    The total variation (TV) regularization method is an effective method for image deblurring in preserving edges. However, the TV based solutions usually have some staircase effects. In order to alleviate the staircase effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1-fidelity term and a TV with overlapping group sparsity (OGS) regularization term. Moreover, we impose a box constraint to the proposed model for getting more accurate solutions. The solving algorithm for our model is under the framework of the alternating direction method of multipliers (ADMM). We use an inner loop which is nested inside the majorization minimization (MM) iteration for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method can significantly improve the restoration quality, both in terms of peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860
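
    A stripped-down 1D analogue of the TV part of this model (plain TV with a quadratic fidelity term, no overlapping-group term, no blur operator, synthetic data; all illustrative assumptions) shows the ADMM structure with a quadratic x-update and a soft-thresholding z-update:

```python
import numpy as np

# ADMM for 1D TV denoising: minimize 0.5*||x - y||^2 + lam*||D x||_1
# with D the first-difference operator and splitting z = D x.
rng = np.random.default_rng(4)
n, lam, rho = 100, 1.0, 1.0
signal = np.repeat([0.0, 2.0, -1.0, 1.0], 25)       # piecewise constant
y = signal + 0.2 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)                      # (n-1) x n differences
M = np.eye(n) + rho * D.T @ D                       # x-update system matrix
z = np.zeros(n - 1); u = np.zeros(n - 1)
for _ in range(200):
    x = np.linalg.solve(M, y + rho * D.T @ (z - u))            # quadratic step
    w = D @ x + u
    z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)    # soft threshold
    u += D @ x - z                                             # dual update

tv = lambda v: np.abs(np.diff(v)).sum()
print(tv(y), tv(x))  # total variation drops sharply after denoising
```

    The paper's full model replaces the ℓ1 term on Dx with an overlapping-group-sparsity penalty handled by an inner MM loop, and adds a blur operator and box constraint, but the outer ADMM skeleton is the same.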

  8. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
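
    The second selection strategy (the discrepancy principle) can be sketched on a toy sparse-recovery problem. The ISTA solver, problem dimensions, and noise level below are illustrative assumptions, not the paper's structural-damage setup:

```python
import numpy as np

# Discrepancy principle for l1 regularization: among ISTA solutions over a
# grid of lambdas, pick the one whose residual variance best matches the
# (known) measurement-noise variance.
rng = np.random.default_rng(2)
m, n, sigma = 40, 80, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[3, 17, 42]] = [1.0, -0.8, 0.6]
b = A @ x_true + sigma * rng.standard_normal(m)

def ista(lam, iters=500):
    x = np.zeros(n)
    t = 1.0 / np.linalg.norm(A, 2) ** 2     # step size 1/||A||_2^2
    for _ in range(iters):
        g = x - t * A.T @ (A @ x - b)
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold
    return x

lams = np.logspace(-3, 0, 15)
scores = [(abs(np.mean((A @ ista(l) - b) ** 2) - sigma**2), l) for l in lams]
lam_star = min(scores)[1]
x_star = ista(lam_star)
print(lam_star)
```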

  9. Nonminimal coupling for the gravitational and electromagnetic fields: Black hole solutions and solitons

    NASA Astrophysics Data System (ADS)

    Balakin, Alexander B.; Bochkarev, Vladimir V.; Lemos, José P. S.

    2008-04-01

    Using a Lagrangian formalism, a three-parameter nonminimal Einstein-Maxwell theory is established. The three parameters q1, q2, and q3 characterize the cross-terms in the Lagrangian, between the Maxwell field and terms linear in the Ricci scalar, Ricci tensor, and Riemann tensor, respectively. Static spherically symmetric equations are set up, and the three parameters are interrelated and chosen so that effectively the system reduces to one parameter only, q. Specific black hole and other types of one-parameter solutions are studied. First, as a preparation, the Reissner-Nordström solution, with q1=q2=q3=0, is displayed. Then, we search for solutions in which the electric field is regular everywhere as well as asymptotically Coulombian, and the metric potentials are regular at the center as well as asymptotically flat. In this context, the one-parameter model with q1≡-q, q2=2q, q3=-q, called the Gauss-Bonnet model, is analyzed in detail. The study is done through the solution of the Abel equation (the key equation), and the dynamical system associated with the model. There is extra focus on an exact solution of the model and its critical properties. Finally, an exactly integrable one-parameter model, with q1≡-q, q2=q, q3=0, is also considered in detail. A special submodel of this one-parameter model, in which the Fibonacci number appears naturally, is shown, and the corresponding exact solution is presented. Interestingly enough, it is a soliton of the theory, the Fibonacci soliton, without horizons and with a mild conical singularity at the center.

  10. Solving the Rational Polynomial Coefficients Based on L Curve

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Li, X.; Yue, T.; Huang, W.; He, C.; Huang, Y.

    2018-05-01

    The rational polynomial coefficients (RPC) model is a generalized sensor model that can achieve high approximation accuracy, and it is widely used in photogrammetry and remote sensing. The least squares method is usually used to determine the optimal parameter solution of the rational function model. However, when the distribution of control points is not uniform or the model is over-parameterized, the coefficient matrix of the normal equation becomes singular, the normal equation becomes ill-conditioned, and the obtained solutions are extremely unstable or even wrong. Tikhonov regularization can effectively solve such ill-conditioned equations. In this paper, we solve the ill-conditioned equations by the regularization method and determine the regularization parameter by the L-curve. Experiments on aerial frame photos show that the first-order RPC model with equal denominators has the highest accuracy. A high-order RPC model is not necessary when processing frame images, as the RPC model and the projective model are almost the same. The results show that the first-order RPC model is basically consistent with the rigorous photogrammetric sensor model. Orthorectification results of both the first-order RPC model and the camera model (ERDAS 9.2 platform) are similar to each other, with maximum residuals in X and Y of 0.8174 feet and 0.9272 feet, respectively. This result shows that the RPC model can be used as a replacement sensor model in aerial photogrammetric processing.
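    The Tikhonov/L-curve selection described here can be sketched generically (not the authors' code: the Hilbert-type test matrix, the lambda grid, and the maximum-curvature corner rule are illustrative assumptions):

    ```python
    import numpy as np

    def tikhonov_l_curve(A, y, lams):
        """Tikhonov solutions on a lambda grid plus the L-curve corner index,
        chosen as the point of maximum curvature in log-log coordinates."""
        n = A.shape[1]
        xs = [np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y) for lam in lams]
        res = np.array([np.linalg.norm(A @ x - y) for x in xs])   # data misfit
        sol = np.array([np.linalg.norm(x) for x in xs])           # solution size
        r, s = np.log(res), np.log(sol)
        dr, ds = np.gradient(r), np.gradient(s)
        kappa = (dr * np.gradient(ds) - ds * np.gradient(dr)) / (dr**2 + ds**2) ** 1.5
        corner = 1 + int(np.argmax(np.abs(kappa[1:-1])))          # ignore the endpoints
        return xs, res, sol, corner

    # Ill-conditioned toy normal equation (Hilbert-type matrix), mimicking the
    # instability caused by non-uniform control points or over-parameterization.
    n = 12
    A = np.array([[1.0 / (i + j + 1.0) for j in range(n)] for i in range(n)])
    y = A @ np.ones(n) + 1e-5 * np.random.default_rng(1).standard_normal(n)
    lams = np.logspace(-10, 0, 30)
    xs, res, sol, corner = tikhonov_l_curve(A, y, lams)
    ```

    The corner balances the two monotone branches of the L-curve: misfit grows with lambda while the solution norm shrinks.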

  11. Evaluation of global equal-area mass grid solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron

    2015-04-01

    The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.

  12. An Onsager Singularity Theorem for Turbulent Solutions of Compressible Euler Equations

    NASA Astrophysics Data System (ADS)

    Drivas, Theodore D.; Eyink, Gregory L.

    2017-12-01

    We prove that bounded weak solutions of the compressible Euler equations will conserve thermodynamic entropy unless the solution fields have sufficiently low space-time Besov regularity. A quantity measuring kinetic energy cascade will also vanish for such Euler solutions, unless the same singularity conditions are satisfied. It is shown furthermore that strong limits of solutions of compressible Navier-Stokes equations that are bounded and exhibit anomalous dissipation are weak Euler solutions. These inviscid limit solutions have non-negative anomalous entropy production and kinetic energy dissipation, with both vanishing when solutions are above the critical degree of Besov regularity. Stationary, planar shocks in Euclidean space with an ideal-gas equation of state provide simple examples that satisfy the conditions of our theorems and which demonstrate sharpness of our L^3-based conditions. These conditions involve space-time Besov regularity, but we show that they are satisfied by Euler solutions that possess similar space regularity uniformly in time.

  13. ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.

    PubMed

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.
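    A minimal sketch of the elastic net penalty itself, for plain least squares rather than Cox's partial likelihood, and by cyclic coordinate descent rather than the paper's ODE-based path algorithm (all names and values are invented):

    ```python
    import numpy as np

    def elastic_net_cd(X, y, lam1, lam2, n_iter=200):
        """Cyclic coordinate descent for
        min_b  0.5/n * ||y - Xb||^2 + lam1*||b||_1 + 0.5*lam2*||b||^2
        (the elastic net combines the l1 and ridge penalties)."""
        n, p = X.shape
        b = np.zeros(p)
        r = y.copy()                          # current residual y - Xb
        col_sq = (X ** 2).sum(axis=0) / n
        for _ in range(n_iter):
            for j in range(p):
                r += X[:, j] * b[j]           # remove coordinate j's contribution
                rho = X[:, j] @ r / n
                b[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)
                r -= X[:, j] * b[j]           # add the updated contribution back
        return b

    rng = np.random.default_rng(4)
    X = rng.standard_normal((100, 10))
    b_true = np.zeros(10)
    b_true[0], b_true[3] = 3.0, -2.0          # sparse ground truth
    y = X @ b_true + 0.1 * rng.standard_normal(100)
    b = elastic_net_cd(X, y, lam1=0.1, lam2=0.01)
    ```

    The l1 part zeroes out irrelevant coefficients while the small ridge term (the "fixed small ridge term" of the paper's first step) keeps the subproblems strongly convex.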

  14. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of this solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  15. Regularities in the association of polymethacrylic acid with benzethonium chloride in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Tugay, A. V.; Zakordonskiy, V. P.

    2006-06-01

    The association of cationogenic benzethonium chloride with polymethacrylic acid in aqueous solutions was studied by nephelometry, conductometry, tensiometry, viscometry, and pH-metry. The critical concentrations of aggregation and polymer saturation with the surface-active substance were determined. A model describing processes in such systems step by step was suggested.

  16. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  17. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells to several hundred years for the exchange between the atmosphere and the deep ocean and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the following three conditions hold: 1) there exists a solution, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed, so a regularization method is required to replace the original problem with a well-posed one; a solution strategy then amounts to 1) constructing a solution x, 2) assessing the validity of the solution, 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, ENKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. 
While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.

  18. Phase-locked patterns of the Kuramoto model on 3-regular graphs

    NASA Astrophysics Data System (ADS)

    DeVille, Lee; Ermentrout, Bard

    2016-09-01

    We consider the existence of non-synchronized fixed points to the Kuramoto model defined on sparse networks: specifically, networks where each vertex has degree exactly three. We show that "most" such networks support multiple attracting phase-locked solutions that are not synchronized and study the depth and width of the basins of attraction of these phase-locked solutions. We also show that it is common in "large enough" graphs to find phase-locked solutions where one or more of the links have angle difference greater than π/2.

  19. Phase-locked patterns of the Kuramoto model on 3-regular graphs.

    PubMed

    DeVille, Lee; Ermentrout, Bard

    2016-09-01

    We consider the existence of non-synchronized fixed points to the Kuramoto model defined on sparse networks: specifically, networks where each vertex has degree exactly three. We show that "most" such networks support multiple attracting phase-locked solutions that are not synchronized and study the depth and width of the basins of attraction of these phase-locked solutions. We also show that it is common in "large enough" graphs to find phase-locked solutions where one or more of the links have angle difference greater than π/2.

  20. Black hole solution in the framework of arctan-electrodynamics

    NASA Astrophysics Data System (ADS)

    Kruglov, S. I.

    An arctan-electrodynamics coupled with the gravitational field is investigated. We obtain a regular black hole solution that at r →∞ gives corrections to the Reissner-Nordström solution. The corrections to Coulomb’s law at r →∞ are found. We evaluate the mass of the black hole, which is a function of the dimensional parameter β introduced in the model. The magnetically charged black hole is also investigated, and we obtain the magnetic mass of the black hole and the metric function at r →∞. A regular black hole solution with a de Sitter core is obtained at r → 0. We show that there is no singularity of the Ricci scalar for electrically and magnetically charged black holes. Restrictions on the electric and magnetic fields are found that follow from the requirement of the absence of a superluminal sound speed and the requirement of classical stability.

  1. Construction of normal-regular decisions of Bessel typed special system

    NASA Astrophysics Data System (ADS)

    Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.

    2017-09-01

    A special system of second-order partial differential equations is studied whose solution, expressed through the degenerate hypergeometric function, reduces to Bessel functions of two variables. To construct a solution of this system near its regular and irregular singularities, we use the Frobenius-Latysheva method, applying the concepts of rank and antirank. The basic theorem is proved, establishing the existence of four linearly independent solutions of the studied Bessel-type system. To prove the existence of normal-regular solutions, we establish necessary conditions for their existence. The existence and convergence of a normal-regular solution are shown using the notions of rank and antirank.

  2. Bayesian Inversion of 2D Models from Airborne Transient EM Data

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Key, K.; Ray, A.

    2016-12-01

    The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. 
The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.

  3. Reconstruction of dynamic image series from undersampled MRI data using data-driven model consistency condition (MOCCO).

    PubMed

    Velikina, Julia V; Samsonov, Alexey A

    2015-11-01

    To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.

  4. RECONSTRUCTION OF DYNAMIC IMAGE SERIES FROM UNDERSAMPLED MRI DATA USING DATA-DRIVEN MODEL CONSISTENCY CONDITION (MOCCO)

    PubMed Central

    Velikina, Julia V.; Samsonov, Alexey A.

    2014-01-01

    Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724
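    The contrast between a hard low-rank constraint and a MOCCO-style soft model-consistency penalty can be sketched on a toy linear system (the dimensions, operators, and penalty weight below are invented for illustration; the actual method operates on dynamic MRI data):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, m, k = 64, 40, 3
    E = rng.standard_normal((m, n))                   # undersampled "encoding" operator
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]  # pre-estimated temporal subspace
    x_true = V @ rng.standard_normal(k) + 0.2 * rng.standard_normal(n)  # mostly in-model
    d = E @ x_true                                    # measured data

    # Hard low-rank reconstruction: force x = V c, so any signal deviating
    # from the pre-estimated model is irrecoverably lost.
    c = np.linalg.lstsq(E @ V, d, rcond=None)[0]
    x_hard = V @ c

    # Soft model-consistency penalty: the solution stays full rank, and only the
    # out-of-subspace component is penalized (P_out is idempotent, so P'P = P).
    P_out = np.eye(n) - V @ V.T
    lam = 1.0
    x_soft = np.linalg.solve(E.T @ E + lam * P_out, E.T @ d)
    ```

    Because the penalty merely discourages, rather than forbids, departures from the subspace, the soft solution can fit data components the rank-k model cannot represent.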

  5. Application of thermodynamics to silicate crystalline solutions

    NASA Technical Reports Server (NTRS)

    Saxena, S. K.

    1972-01-01

    A review of thermodynamic relations is presented, describing Guggenheim's regular solution models, the simple mixture, the zeroth approximation, and the quasi-chemical model. The possibilities of retrieving useful thermodynamic quantities from phase equilibrium studies are discussed. Such quantities include the activity-composition relations and the free energy of mixing in crystalline solutions. Theory and results of the study of partitioning of elements in coexisting minerals are briefly reviewed. A thermodynamic study of the intercrystalline and intracrystalline ion exchange relations gives useful information on the thermodynamic behavior of the crystalline solutions involved. Such information is necessary for the solution of most petrogenic problems and for geothermometry. Thermodynamic quantities for tungstates (CaWO4-SrWO4) are calculated.

  6. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which yields the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results show not only that the proposed CV-SES has better generalization ability than CS-SVM with various hybrids of grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also that CV-SES uses less running time.

  7. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  8. The cost of uniqueness in groundwater model calibration

    NASA Astrophysics Data System (ADS)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. 
A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
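    The weighted-average interpretation in this record can be made concrete with the model resolution matrix of a Tikhonov-regularized linear inverse problem (a generic textbook construction, not the authors' pilot-point setup; the sizes and regularization weight are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 30, 50                  # fewer observations than parameters: underdetermined
    G = rng.standard_normal((m, n))
    lam = 0.5

    # Model resolution matrix of the Tikhonov-regularized inverse. For noise-free
    # data the estimate is exactly x_hat = R @ x_true, so row i of R holds the
    # averaging weights that blur the true property field into the estimate at
    # parameter i; R = I would mean perfect resolution.
    R = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ G)

    x_true = rng.standard_normal(n)
    x_hat = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ (G @ x_true))
    ```

    The trace of R, the "effective number of resolved parameters", is bounded by the number of observations, which quantifies the unavoidable loss of detail.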

  9. Convection Regularization of High Wavenumbers in Turbulence ANS Shocks

    DTIC Science & Technology

    2011-07-31

    dynamics of particles that adhere to one another upon collision and has been studied as a simple cosmological model for describing the nonlinear formation of...solution we mean a solution to the Cauchy problem in the following sense. Definition 5.1. A function u : R × [0, T ] 7→ RN is a weak solution of the...step 2 the limit function in the α → 0 limit is shown to satisfy the definition of a weak solution for the Cauchy problem. Without loss of generality

  10. Bianchi type-I magnetized cosmological models for the Einstein-Boltzmann equation with the cosmological constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayissi, Raoul Domingo, E-mail: raoulayissi@yahoo.fr; Noutchegueme, Norbert, E-mail: nnoutch@yahoo.fr

    Global solutions regular for the Einstein-Boltzmann equation on a magnetized Bianchi type-I cosmological model with the cosmological constant are investigated. We suppose that the metric is locally rotationally symmetric. The Einstein-Boltzmann equation has been already considered by some authors. But, in general Bancel and Choquet-Bruhat [Ann. Henri Poincaré XVIII(3), 263 (1973); Commun. Math. Phys. 33, 83 (1973)], they proved only the local existence, and in the case of the nonrelativistic Boltzmann equation. Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 52] obtained a global existence result, for the relativistic Boltzmann equation coupled with the Einstein equations and using the Yosida operator, but confusing unfortunately with the nonrelativistic case. Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)] and Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)], have obtained a global solution in time, but still using the Yosida operator and considering only the uncharged case. Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)] also proved a global existence of solutions to the Maxwell-Boltzmann system using the characteristic method. In this paper, we obtain using a method totally different from those used in the works of Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)], Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)], Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)], and Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 
52] the global in time existence and uniqueness of a regular solution to the Einstein-Maxwell-Boltzmann system with the cosmological constant. We define and we use the weighted Sobolev separable spaces for the Boltzmann equation; some special spaces for the Einstein equations, then we clearly display all the proofs leading to the global existence theorems.

  11. Bianchi type-I magnetized cosmological models for the Einstein-Boltzmann equation with the cosmological constant

    NASA Astrophysics Data System (ADS)

    Ayissi, Raoul Domingo; Noutchegueme, Norbert

    2015-01-01

    Regular global solutions of the Einstein-Boltzmann equation on a magnetized Bianchi type-I cosmological model with cosmological constant are investigated. We suppose that the metric is locally rotationally symmetric. The Einstein-Boltzmann equation has already been considered by several authors. Bancel and Choquet-Bruhat [Ann. Henri Poincaré XVIII(3), 263 (1973); Commun. Math. Phys. 33, 83 (1973)] proved only local existence, and only for the nonrelativistic Boltzmann equation. Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 52] obtained a global existence result for the relativistic Boltzmann equation coupled with the Einstein equations, using the Yosida operator, but unfortunately conflated it with the nonrelativistic case. Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)] and Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)] obtained a global-in-time solution, but still using the Yosida operator and considering only the uncharged case. Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)] also proved global existence of solutions to the Maxwell-Boltzmann system using the characteristic method. In this paper, using a method entirely different from those of the works cited above, we obtain the global-in-time existence and uniqueness of a regular solution to the Einstein-Maxwell-Boltzmann system with the cosmological constant. We define and use weighted separable Sobolev spaces for the Boltzmann equation and special spaces for the Einstein equations, and we display in full the proofs leading to the global existence theorems.

  12. A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2016-02-01

    Large-eddy simulation (LES) solves only the large-scale part of turbulent flows, using a scale separation based on a filtering operation. The solution of the filtered Navier-Stokes equations then requires modeling the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress tensor. The formulation is based on a regularization procedure applied to the gradient model to correct its unstable behavior. The model is developed from a priori tests to improve the accuracy of the modeling for both structural and functional performance, i.e., the model's ability to locally approximate the unknown SGS term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work extends to the SGS stress tensor the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] for the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS) and filtered DNS for classic flows simulated with a pseudo-spectral solver, and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, a turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement in the prediction of velocity and scalar statistics.
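The (unregularized) gradient model that this procedure starts from is easy to sketch. The following is a generic NumPy illustration on a synthetic 2D field, not the authors' DRG implementation; the grid, the linear velocity field, and the filter width delta are all made up for the example.

```python
import numpy as np

def gradient_model_tau(u, v, dx, delta):
    """Gradient-model estimate of the 2D SGS stress tensor:
    tau_ij ~ (delta^2 / 12) * (du_i/dx_k)(du_j/dx_k), summed over k."""
    dudx, dudy = np.gradient(u, dx, edge_order=2)
    dvdx, dvdy = np.gradient(v, dx, edge_order=2)
    c = delta ** 2 / 12.0
    tau_xx = c * (dudx ** 2 + dudy ** 2)
    tau_xy = c * (dudx * dvdx + dudy * dvdy)
    tau_yy = c * (dvdx ** 2 + dvdy ** 2)
    return tau_xx, tau_xy, tau_yy

# Synthetic linear velocity field on a small grid (derivatives are exact there).
n, dx = 16, 0.1
coords = np.arange(n) * dx
X, Y = np.meshgrid(coords, coords, indexing="ij")
txx, txy, tyy = gradient_model_tau(2 * X + Y, -X + 3 * Y, dx, delta=2 * dx)
```

On a linear field the finite-difference derivatives are exact, so the returned tensor matches the closed-form expression entry by entry; the regularization step that stabilizes this model in the paper is deliberately not reproduced here.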

  13. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
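The Tikhonov branch of such a scheme can be sketched on a discretized Abel-type operator. This is a hedged illustration, not the author's method: the midpoint discretization of the kernel K(y, r) = 2r/sqrt(r^2 - y^2), the smooth test profile, and the fixed regularization parameter are all example choices, and the compact-set constraints described in the abstract are omitted.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

# Midpoint discretization of the Abel-type forward operator on (0, 1).
n = 60
h = 1.0 / n
r = (np.arange(n) + 0.5) * h
y = r.copy()
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if r[j] > y[i]:  # strict inequality sidesteps the kernel singularity at r = y
            A[i, j] = 2.0 * r[j] / np.sqrt(r[j] ** 2 - y[i] ** 2) * h

x_true = np.exp(-((r - 0.5) / 0.15) ** 2)  # smooth radial profile
b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(n)
x_reg = tikhonov_solve(A, b, lam=1e-2)
```

Note that the unregularized system is singular as discretized (the matrix is strictly upper triangular), so some regularization is mandatory, which is exactly the ill-posedness the abstract refers to.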

  14. Singular Value Decomposition Method to Determine Distance Distributions in Pulsed Dipolar Electron Spin Resonance.

    PubMed

    Srivastava, Madhur; Freed, Jack H

    2017-11-16

    Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude improvement in SNR obviates the need for regularization, which achieves a compromise between canceling the effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulsed dipolar electron spin resonance experiments as an example. Such experiments are useful for measuring distances and their distributions, P(r), between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.
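As a minimal illustration of why a truncation criterion is needed, the sketch below applies truncated SVD (TSVD) to a classic ill-conditioned system. It is generic, not the authors' denoising-plus-SVD pipeline; the Hilbert matrix, noise level, and truncation index k are illustrative choices.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of Ax = b: invert only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ b))

# Classic ill-conditioned test matrix (Hilbert), smooth truth, tiny noise.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

x_full = np.linalg.solve(A, b)  # unregularized: tiny noise is amplified enormously
x_tsvd = tsvd_solve(A, b, k=5)  # truncated: stable at the cost of a small bias
```

Choosing k is the crux; the paper's contribution is precisely a criterion for making that choice automatically once the data have been denoised.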

  15. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed-point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite-element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable.
Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
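The efficiency gap between Newton and Picard iterations that the authors report for the nonlinear Stokes solve can be illustrated on a scalar toy problem. This is a one-dimensional stand-in, not the Stokes equations: quadratic convergence of Newton versus linear convergence of a fixed-point (Picard) iteration on the same root.

```python
import math

def picard(g, x0, tol=1e-12, max_iter=1000):
    """Fixed-point (Picard) iteration x <- g(x); converges linearly at best."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method on f(x) = 0; quadratic convergence near a simple root."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

# Same root computed two ways: x = cos(x), i.e., f(x) = x - cos(x) = 0.
root_p, it_p = picard(math.cos, 1.0)
root_n, it_n = newton(lambda x: x - math.cos(x), lambda x: 1.0 + math.sin(x), 1.0)
```

At this tolerance Picard needs dozens of iterations (its linear rate is |g'| at the fixed point, about 0.67 here), while Newton needs only a handful, which is the same qualitative behavior the paper observes for the nonlinear Stokes equations.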

  16. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

    Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of Rosenau's regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity, etc., and at the same time it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean free path epsilon approaches 0, and the convergence rate is estimated.

  17. Time-reversibility and particle sedimentation

    NASA Technical Reports Server (NTRS)

    Golubitsky, Martin; Krupa, Martin; Lim, Chjan

    1991-01-01

    This paper studies an ODE model, called the Stokeslet model, which describes the sedimentation of small clusters of particles in a highly viscous fluid. This model has a trivial solution in which the n particles arrange themselves at the vertices of a regular n-sided polygon. When n = 3, Hocking and Caflisch et al. (1988) proved the existence of periodic motion (in the frame moving with the center of gravity of the cluster) in which the particles form an isosceles triangle. Here, the study of periodic and quasi-periodic solutions of the Stokeslet model is continued, with emphasis on the spatial and time-reversal symmetries of the model. For three particles, the existence of a second family of periodic solutions and a family of quasi-periodic solutions is proved. It is also indicated how the methods generalize to the case of n particles.

  18. ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM

    PubMed Central

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
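In the simplest setting, least squares with an orthonormal design, the elastic-net estimate of Zou and Hastie has a closed form: lasso soft-thresholding followed by a ridge shrinkage. The sketch below shows that closed form; it is not the Cox partial-likelihood path algorithm of the paper, and the design, coefficients, and penalties are invented for the example.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def enet_orthonormal(X, y, lam1, lam2):
    """Elastic-net minimizer of 0.5||y - X b||^2 + lam1 ||b||_1 + (lam2/2)||b||^2
    when X has orthonormal columns (X^T X = I): threshold, then shrink."""
    b_ols = X.T @ y
    return soft_threshold(b_ols, lam1) / (1.0 + lam2)

# Orthonormal design via QR, sparse-ish truth, a little noise.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((30, 5)))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.5])
y = X @ beta_true + 0.05 * rng.standard_normal(30)
beta_hat = enet_orthonormal(X, y, lam1=0.1, lam2=0.5)
```

With an orthonormal design the objective decouples coordinate-wise, which is what makes this two-step formula exact; the paper's contribution is tracing the same penalized solution continuously in the much harder partial-likelihood setting.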

  19. s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography

    PubMed Central

    Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai

    2016-01-01

    EEG source imaging enables us to reconstruct the current density in the brain from electrical measurements with excellent temporal resolution (~ms). The corresponding EEG inverse problem is ill-posed and has infinitely many solutions, because the number of EEG sensors is usually much smaller than the number of potential dipole locations and because of noise contamination in the recorded signals. To obtain a unique solution, regularization can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve source edges and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, ℓ1−2 regularization is utilized to promote sparsity of the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computation compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM).
Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
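For orientation, the simplest member of this family of sparsity-promoting solvers is plain ℓ1-regularized least squares solved by ISTA (a proximal-gradient method). The sketch below is illustrative only: it handles the ℓ1 penalty, not the paper's ℓ1−2 penalty or its DCA/ADMM algorithm, and the sensing matrix and sparse signal are synthetic.

```python
import numpy as np

def ista(A, b, lam, n_iter=2000):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the quadratic term, then soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Toy sparse-recovery problem: 40 random measurements of a 4-sparse vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
support = [7, 23, 56, 91]
x_true[support] = [3.0, -3.0, 3.0, -3.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.05)
```

The ℓ1−2 penalty of the paper replaces lam*||x||_1 with lam*(||x||_1 − ||x||_2), which is nonconvex and is why the DCA outer loop is needed on top of a proximal solver like this one.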

  20. On the Solutions of a 2+1-Dimensional Model for Epitaxial Growth with Axial Symmetry

    NASA Astrophysics Data System (ADS)

    Lu, Xin Yang

    2018-04-01

    In this paper, we study the evolution equation derived by Xu and Xiang (SIAM J Appl Math 69(5):1393-1414, 2009) to describe heteroepitaxial growth with elastic forces on vicinal surfaces in 2+1 dimensions, in the radial case with uniform mobility. This equation is strongly nonlinear and contains two elliptic integrals defined via the Cauchy principal value. We first derive a formally equivalent parabolic evolution equation (i.e., fully equivalent when sufficient regularity is assumed); the main aim is to prove existence, uniqueness, and regularity of strong solutions. We extensively use techniques from the theory of evolution equations governed by maximal monotone operators in Banach spaces.

  1. Regular black holes: Electrically charged solutions, Reissner-Nordstroem outside a de Sitter core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemos, Jose P. S.; Zanchin, Vilson T.; Centro de Ciencias Naturais e Humanas, Universidade Federal do ABC, Rua Santa Adelia, 166, 09210-170, Santo Andre, Sao Paulo

    2011-06-15

    To have the correct picture of a black hole as a whole, it is of crucial importance to understand its interior. The singularities that lurk inside the horizon of the usual Kerr-Newman family of black hole solutions signal an endpoint to the physical laws and, as such, should be substituted in one way or another. A proposal that has been around for some time is to replace the singular region of the spacetime by a region containing some form of matter or false vacuum configuration that can also cohabit with the black hole interior. Black holes without singularities are called regular black holes. In the present work, regular black hole solutions are found within general relativity coupled to Maxwell's electromagnetism and charged matter. We show that there are objects which correspond to regular charged black holes, whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and the boundary between both regions is made of an electrically charged spherically symmetric coat. There are several types of solutions: regular nonextremal black holes with a null matter boundary, regular nonextremal black holes with a timelike matter boundary, regular extremal black holes with a timelike matter boundary, and regular overcharged stars with a timelike matter boundary. The main physical and geometrical properties of such charged regular solutions are analyzed.

  2. Backscattering and Nonparaxiality Arrest Collapse of Damped Nonlinear Waves

    NASA Technical Reports Server (NTRS)

    Fibich, G.; Ilan, B.; Tsynkov, S.

    2002-01-01

    The critical nonlinear Schrodinger equation (NLS) models the propagation of intense laser light in Kerr media. This equation is derived from the more comprehensive nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. It is known that if the input power of the laser beam (i.e., the L2 norm of the initial solution) is sufficiently high, then the NLS model predicts that the beam will self-focus to a point (i.e., collapse) at a finite propagation distance. Mathematically, this behavior corresponds to the formation of a singularity in the solution of the NLS. A key question which has been open for many years is whether the solution to the NLH, i.e., the 'parent' equation, may nonetheless exist and remain regular everywhere, in particular for those initial conditions (input powers) that lead to blowup in the NLS. In the current study, we address this question by introducing linear damping into both models and subsequently comparing the numerical solutions of the damped NLH (a boundary-value problem) with the corresponding solutions of the damped NLS (an initial-value problem). Linear damping is introduced in much the same way as is done when analyzing the classical constant-coefficient Helmholtz equation using the limiting absorption principle. Numerically, we have found that it provides a very efficient tool for controlling the solutions of both the NLH and NLS. In particular, we have been able to identify initial conditions for which the NLS solution does become singular, whereas the NLH solution still remains regular everywhere. We believe that our finding of a larger domain of existence for the NLH than that for the NLS is accounted for by precisely those mechanisms that have been neglected when deriving the NLS from the NLH, i.e., nonparaxiality and backscattering.

  3. Class of regular bouncing cosmologies

    NASA Astrophysics Data System (ADS)

    Vasilić, Milovan

    2017-06-01

    In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.

  4. On the regularity criterion of weak solutions for the 3D MHD equations

    NASA Astrophysics Data System (ADS)

    Gala, Sadek; Ragusa, Maria Alessandra

    2017-12-01

    The paper deals with the 3D incompressible MHD equations and aims at improving a regularity criterion in terms of the horizontal gradient of velocity and magnetic field. It is proved that the weak solution ( u, b) becomes regular provided that ( \

  5. Size-distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and Lamm equation modeling.

    PubMed

    Schuck, P

    2000-03-01

    A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size-distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). It can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. More details on the size-distribution can be obtained than from van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.
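The idea of tuning the regularization parameter against the noise in the data can be sketched with the (simpler) Morozov discrepancy principle, used here as a stand-in for the CONTIN-style variance analysis described above. Everything below is illustrative: the Gaussian smoothing kernel, the noise level, and the candidate-parameter grid are invented for the example, and the forward model is not the Lamm equation.

```python
import numpy as np

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def pick_lambda_discrepancy(A, b, noise_norm, lams):
    """Pick the largest regularization parameter whose residual stays
    within the noise level (Morozov discrepancy principle)."""
    for lam in sorted(lams, reverse=True):
        x = tikhonov(A, b, lam)
        if np.linalg.norm(A @ x - b) <= noise_norm:
            return lam, x
    lam = min(lams)
    return lam, tikhonov(A, b, lam)

# Severely smoothing forward operator, smooth truth, known noise level.
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) / 0.05) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t)
sigma = 1e-3
rng = np.random.default_rng(2)
b = A @ x_true + sigma * rng.standard_normal(n)
lam_star, x_reg = pick_lambda_discrepancy(A, b, sigma * np.sqrt(n), np.logspace(-10, 0, 21))
```

Fitting the data any more closely than the noise level only fits noise; that is the rationale shared by the discrepancy principle and the confidence-level criterion used in the paper.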

  6. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy

    2017-03-09

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).

  7. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy

    2016-07-01

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).

  8. Gravitating lepton bag model

    NASA Astrophysics Data System (ADS)

    Burinskii, A.

    2015-08-01

    The Kerr-Newman (KN) black hole (BH) solution exhibits the external gravitational and electromagnetic field corresponding to that of the Dirac electron. For the large spin/mass ratio, a ≫ m, the BH loses horizons and acquires a naked singular ring creating two-sheeted topology. This space is regularized by the Higgs mechanism of symmetry breaking, leading to an extended particle that has a regular spinning core compatible with the external KN solution. We show that this core has much in common with the known MIT and SLAC bag models, but has the important advantage of being in accordance with the external gravitational and electromagnetic fields of the KN solution. A peculiar two-sheeted structure of Kerr's gravity provides a framework for the implementation of the Higgs mechanism of symmetry breaking in configuration space in accordance with the concept of the electroweak sector of the Standard Model. Similar to other bag models, the KN bag is flexible and pliant to deformations. For parameters of a spinning electron, the bag takes the shape of a thin rotating disk of the Compton radius, with a ring-string structure and a quark-like singular pole formed at the sharp edge of this disk, indicating that the considered lepton bag forms a single bag-string-quark system.

  9. Gravitating lepton bag model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burinskii, A., E-mail: burinskii@mail.ru

    The Kerr–Newman (KN) black hole (BH) solution exhibits the external gravitational and electromagnetic field corresponding to that of the Dirac electron. For the large spin/mass ratio, a ≫ m, the BH loses horizons and acquires a naked singular ring creating two-sheeted topology. This space is regularized by the Higgs mechanism of symmetry breaking, leading to an extended particle that has a regular spinning core compatible with the external KN solution. We show that this core has much in common with the known MIT and SLAC bag models, but has the important advantage of being in accordance with the external gravitational and electromagnetic fields of the KN solution. A peculiar two-sheeted structure of Kerr’s gravity provides a framework for the implementation of the Higgs mechanism of symmetry breaking in configuration space in accordance with the concept of the electroweak sector of the Standard Model. Similar to other bag models, the KN bag is flexible and pliant to deformations. For parameters of a spinning electron, the bag takes the shape of a thin rotating disk of the Compton radius, with a ring–string structure and a quark-like singular pole formed at the sharp edge of this disk, indicating that the considered lepton bag forms a single bag–string–quark system.

  10. Development of daily "swath" mascon solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas

    2016-04-01

    The Gravity Recovery and Climate Experiment (GRACE) mission has, over the past 14 years, provided invaluable data, the only data of its kind, measuring the total water column in the Earth system. The GRACE project provides monthly average solutions, and experimental quick-look solutions and regularized sliding-window solutions are available from the Center for Space Research (CSR) that implement a sliding-window approach and variable daily weights. The need for special handling of these solutions in data assimilation and the possibility of capturing the total water storage (TWS) signal at sub-monthly time scales motivated this study. This study discusses progress in the development of true daily high-resolution "swath" mascon total water storage estimates from GRACE using Tikhonov regularization. These solutions include estimates of daily TWS for the mascon elements that were "observed" by the GRACE satellites on a given day. This paper discusses the computation techniques and the signal, error, and uncertainty characterization of these daily solutions. We discuss comparisons with the official GRACE RL05 solutions and with the CSR mascon solution to characterize the impact on science results, especially at sub-monthly time scales. The evaluation is done with emphasis on the temporal signal characteristics and validated against in-situ data sets and multiple models.

  11. Deforming regular black holes

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2017-06-01

    In this work, we have deformed regular black holes that possess a general mass term described by a function which generalizes the Bardeen and Hayward mass functions. Using linear constraints on the energy-momentum tensor to generate metrics, the solutions presented here are either regular or singular. That is, within this approach, it is possible to generate regular or singular black holes from regular or singular black holes. Moreover, contrary to the Bardeen and Hayward regular solutions, the deformed regular black holes may violate the weak energy condition despite the presence of spherical symmetry. Some comments on accretion of deformed black holes in cosmological scenarios are made.

  12. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and other fields. These domains often involve data on human subjects that are subject to strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not previously been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, as widely seen in genetics, epidemiology, the social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
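For reference, the model being protected, plain L2-regularized logistic regression, looks as follows when trained in the clear, with none of the paper's cryptographic machinery; the data, step size, and penalty below are illustrative.

```python
import numpy as np

def fit_logreg_l2(X, y, lam=0.1, lr=0.5, n_iter=2000):
    """Plain (non-private) L2-regularized logistic regression via gradient descent.
    y is 0/1; lam penalizes the weights, and the intercept is left unpenalized."""
    n, d = X.shape
    w, b0 = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b0)))          # predicted probabilities
        w -= lr * (X.T @ (p - y) / n + lam * w)           # penalized gradient step
        b0 -= lr * np.mean(p - y)
    return w, b0

# Two well-separated Gaussian classes in 2D.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])
w, b0 = fit_logreg_l2(X, y)
acc = np.mean(((X @ w + b0) > 0) == (y == 1))
```

In the collaborative setting of the paper, the per-institution gradient contributions (the `X.T @ (p - y)` terms) are exactly the quantities that must not be exchanged in the clear, which is what the cryptographic protocol addresses.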

  13. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and other fields. These domains often involve data on human subjects that are subject to strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not previously been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, as widely seen in genetics, epidemiology, the social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  14. Mechanical behavior of regular open-cell porous biomaterials made of diamond lattice unit cells.

    PubMed

    Ahmadi, S M; Campoli, G; Amin Yavari, S; Sajadi, B; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A

    2014-06-01

    Cellular structures with highly controlled micro-architectures are promising materials for orthopedic applications that require bone-substituting biomaterials or implants. The availability of additive manufacturing techniques has enabled manufacturing of biomaterials made of one or multiple types of unit cells. The diamond lattice unit cell is one of the relatively new types of unit cells that are used in manufacturing of regular porous biomaterials. As opposed to many other types of unit cells, there is currently no analytical solution that could be used for prediction of the mechanical properties of cellular structures made of the diamond lattice unit cell. In this paper, we present new analytical solutions and closed-form relationships for predicting the elastic modulus, Poisson's ratio, critical buckling load, and yield (plateau) stress of cellular structures made of the diamond lattice unit cell. The mechanical properties predicted using the analytical solutions are compared with those obtained using finite element models. A number of solid and porous titanium (Ti6Al4V) specimens were manufactured using selective laser melting. A series of experiments were then performed to determine the mechanical properties of the matrix material and cellular structures. The experimentally measured mechanical properties were compared with those obtained using analytical solutions and finite element (FE) models. It has been shown that, for small apparent density values, the mechanical properties obtained using analytical and numerical solutions are in agreement with each other and with experimental observations. The properties estimated using an analytical solution based on the Euler-Bernoulli theory markedly deviated from experimental results for large apparent density values. The mechanical properties estimated using FE models and another analytical solution based on the Timoshenko beam theory better matched the experimental observations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We also apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
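The L2 total variation subproblem mentioned in this abstract is conventionally handled with the split-Bregman iteration. A minimal 1-D sketch of that building block (a generic TV-denoising example, not the authors' 3-D tomography code; parameter values are illustrative):

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding (the proximal map of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_1d(f, mu=0.5, lam=2.0, n_iter=100):
    """Split-Bregman iteration for min_u 0.5||u - f||^2 + mu ||D u||_1,
    with D the 1-D forward-difference operator."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n difference matrix
    A = np.eye(n) + lam * D.T @ D             # fixed linear system matrix
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + lam * D.T @ (d - b))  # quadratic subproblem
        Du = D @ u
        d = shrink(Du + b, mu / lam)                      # l1 subproblem
        b = b + Du - d                                    # Bregman update
    return u

# piecewise-constant signal plus noise: TV keeps the step sharp
rng = np.random.default_rng(1)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.normal(size=100)
u = tv_denoise_1d(f)
```

The key property illustrated here is the one the abstract relies on: unlike Tikhonov smoothing, the TV term preserves the sharp jump between the two blocks.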

  16. Partial regularity of weak solutions to a PDE system with cubic nonlinearity

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Xu, Xiangsheng

    2018-04-01

    In this paper we investigate regularity properties of weak solutions to a PDE system that arises in the study of biological transport networks. The system consists of a possibly singular elliptic equation for the scalar pressure of the underlying biological network coupled to a diffusion equation for the conductance vector of the network. There are several different types of nonlinearities in the system. Of particular mathematical interest is a term that is a polynomial function of solutions and their partial derivatives and this polynomial function has degree three. That is, the system contains a cubic nonlinearity. Only weak solutions to the system have been shown to exist. The regularity theory for the system remains fundamentally incomplete. In particular, it is not known whether or not weak solutions develop singularities. In this paper we obtain a partial regularity theorem, which gives an estimate for the parabolic Hausdorff dimension of the set of possible singular points.

  17. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    NASA Astrophysics Data System (ADS)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models whether or not a forcing term is considered. The obtained expressions for the nonequilibrium part are related only to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including single-phase and two-phase layered power-law fluid flows between two parallel plates, and droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase computing efficiency by around 15%, while keeping the same accuracy and stability. The present model is also found to be capable of reasonably predicting the critical capillary number of droplet breakup.
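The regularization step described here, reconstructing the nonequilibrium part of the distribution before collision, can be sketched for a single D2Q9 node. This is the standard Hermite-projection form of regularized LBM for a single-phase BGK collision, shown only as an illustration; the paper's pseudo-inverse, color-gradient formulation is a generalization of this idea, and all values below are made up:

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and speed of sound squared
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1.0 / 3.0

def feq(rho, u):
    """Second-order equilibrium distribution at a single node."""
    cu = c @ u
    return rho * w * (1 + cu / cs2 + 0.5 * (cu / cs2) ** 2 - 0.5 * (u @ u) / cs2)

def regularized_neq(f, rho, u):
    """Project the non-equilibrium part onto its second Hermite moment:
    f_i^neq ~ w_i (Q_i : Pi^neq) / (2 cs^4), with Q_i = c_i c_i - cs^2 I."""
    fneq = f - feq(rho, u)
    Pi = np.einsum('i,ia,ib->ab', fneq, c, c)            # non-equilibrium stress
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)
    return w * np.einsum('iab,ab->i', Q, Pi) / (2 * cs2 ** 2)

# one regularized BGK collision at a single perturbed node
rng = np.random.default_rng(2)
f = feq(1.0, np.array([0.05, 0.0])) + 1e-3 * rng.normal(size=9)
rho, u = f.sum(), (f @ c) / f.sum()
tau = 0.8
f_post = feq(rho, u) + (1 - 1 / tau) * regularized_neq(f, rho, u)
```

By construction the projected non-equilibrium part carries no mass or momentum, so the collision conserves both while discarding the non-hydrodynamic ("ghost") content of the perturbation.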

  18. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, the quality of the reconstructed bioluminescent source obtained by regularization methods depends crucially on the choice of the regularization parameters, and to date their selection remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an L2 data-fidelity term and a general regularization term. For choosing the regularization parameters, an efficient model function approach is proposed that does not require knowledge of the noise level; it requires only the computation of the residual and the regularized solution norm. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, a micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated the ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that the algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm was computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparison with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of the algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom and in vivo examples, the authors demonstrated that bioluminescent sources can be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm outperformed both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.

  19. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated previously that Tikhonov regularization produces spherical harmonic solutions from GRACE that exhibit very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes from the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  20. Born-Infeld Gravity Revisited

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Sahraee, M.

    2013-12-01

    In this paper, we investigate the behavior of linearized gravitational excitations in Born-Infeld gravity in AdS3 space. We obtain the linearized equation of motion and show that this higher-order gravity propagates two gravitons, one massless and one massive, on the AdS3 background. In contrast to R2 models such as TMG or NMG, Born-Infeld gravity does not have a critical point for any regular choice of parameters, so the logarithmic solution is not a solution of this model; consequently, one cannot find a logarithmic conformal field theory as a dual model for Born-Infeld gravity.

  1. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
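The firm-shrinkage operator named at the end of this abstract is a standard weakly convex proximal map: it zeroes small entries, linearly inflates mid-range ones, and leaves large ones untouched. A small sketch of the resulting proximal-gradient iteration (illustrative parameters and data, not the authors' exact algorithm or tuning):

```python
import numpy as np

def firm_shrink(x, lam, mu):
    """Firm thresholding: zero below lam, linear ramp on (lam, mu],
    identity above mu; interpolates between soft and hard thresholding."""
    return np.where(np.abs(x) <= lam, 0.0,
           np.where(np.abs(x) <= mu,
                    np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam),
                    x))

def sparse_logistic(X, y, lam=0.1, mu=1.0, lr=0.5, n_iter=500):
    """Proximal gradient for sparse logistic regression, with the
    firm-shrinkage operator as the proximal step."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = firm_shrink(w - lr * X.T @ (p - y) / n, lr * lam, mu)
    return w

# only the first of ten features carries signal
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(float)
w = sparse_logistic(X, y)
```

Unlike soft thresholding, the firm operator does not shrink large coefficients, which is the bias-reduction property the weakly convex penalty is meant to buy.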

  2. Detection of regularities in variation in geomechanical behavior of rock mass during multi-roadway preparation and mining of an extraction panel

    NASA Astrophysics Data System (ADS)

    Tsvetkov, AB; Pavlova, LD; Fryanov, VN

    2018-03-01

    The results of numerical simulation of the stress–strain state in a rock block and the surrounding rock mass under multi-roadway preparation for mining are presented. The numerical solutions obtained by nonlinear modeling and by using the constitutive relations of the theory of elasticity are compared. Regularities of the stress distribution in the vicinity of the pillars located in the abutment pressure zone are identified.

  3. Solid/liquid interfacial free energies in binary systems

    NASA Technical Reports Server (NTRS)

    Nason, D.; Tiller, W. A.

    1973-01-01

    Description of a semiquantitative technique for predicting the segregation characteristics of smooth interfaces between binary solid and liquid solutions in terms of readily available thermodynamic parameters of the bulk solutions. A lattice-liquid interfacial model and a pair-bonded regular solution model are employed in the treatment with an accommodation for liquid interfacial entropy. The method is used to calculate the interfacial segregation and the free energy of segregation for solid-liquid interfaces between binary solutions for the (111) boundary of fcc crystals. The zone of compositional transition across the interface is shown to be on the order of a few atomic layers in width, being moderately narrower for ideal solutions. The free energy of the segregated interface depends primarily upon the solid composition and the heats of fusion of the component atoms, the composition difference of the solutions, and the difference of the heats of mixing of the solutions.
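For context, the pair-bonded regular solution model used in this treatment adds a single interaction term Ω·x·(1−x) to the ideal entropy of mixing. A minimal sketch of the resulting bulk free-energy-of-mixing curve (textbook form of the model, not the interfacial calculation itself; all numerical values are illustrative):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def g_mix(x, omega, T):
    """Molar Gibbs energy of mixing for a binary regular solution:
    ideal configurational entropy plus the enthalpic term omega*x*(1-x)."""
    x = np.asarray(x)
    ideal = R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))
    return ideal + omega * x * (1 - x)

# omega > 2RT produces a miscibility gap: two minima with a hump at x = 0.5
T = 1000.0
x = np.linspace(0.01, 0.99, 99)
g = g_mix(x, omega=2.5 * R * T, T=T)
```

Setting omega to zero recovers the ideal-solution limit mentioned in the abstract, for which the compositional transition zone at the interface is narrowest.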

  4. Substructural Regularization With Data-Sensitive Granularity for Sequence Transfer Learning.

    PubMed

    Sun, Shichang; Liu, Hongbo; Meng, Jiana; Chen, C L Philip; Yang, Yu

    2018-06-01

    Sequence transfer learning is of interest in both academia and industry with the emergence of numerous new text domains from Twitter and other social media tools. In this paper, we put forward the data-sensitive granularity for transfer learning, and then, a novel substructural regularization transfer learning model (STLM) is proposed to preserve target domain features at substructural granularity in the light of the condition of labeled data set size. Our model is underpinned by hidden Markov model and regularization theory, where the substructural representation can be integrated as a penalty after measuring the dissimilarity of substructures between target domain and STLM with relative entropy. STLM can achieve the competing goals of preserving the target domain substructure and utilizing the observations from both the target and source domains simultaneously. The estimation of STLM is very efficient since an analytical solution can be derived as a necessary and sufficient condition. The relative usability of substructures to act as regularization parameters and the time complexity of STLM are also analyzed and discussed. Comprehensive experiments of part-of-speech tagging with both Brown and Twitter corpora fully justify that our model can make improvements on all the combinations of source and target domains.

  5. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason.
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type and other regularization methods in Banach spaces is the class of variational inequality assumptions that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1.
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber---Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. 
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with a general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  6. Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model

    PubMed Central

    Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz

    2014-01-01

    Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels via the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from estimates of intermediate variables and do not yield an optimal estimate of oxygen tension values, due to their nonlinear dependence on the ratio of intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits knowledge available from studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach, where knowledge about intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of earlier methods, and its superior performance is demonstrated both quantitatively and qualitatively. PMID:23732915

  7. The Role of the Pressure in the Partial Regularity Theory for Weak Solutions of the Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Chamorro, Diego; Lemarié-Rieusset, Pierre-Gilles; Mayoufi, Kawther

    2018-04-01

    We study the role of the pressure in the partial regularity theory for weak solutions of the Navier-Stokes equations. By introducing the notion of dissipative solutions, due to Duchon and Robert (Nonlinearity 13:249-255, 2000), we provide a generalization of the Caffarelli, Kohn and Nirenberg theory. Our approach sheds new light on the role of the pressure in this theory in connection with Serrin's local regularity criterion.

  8. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
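The second-order-in-time damped flow at the heart of this abstract can be illustrated on a plain linear least-squares problem. The sketch below integrates x'' + ηx' = −∇J(x) with a semi-implicit damped scheme (a simplified stand-in for the paper's damped symplectic scheme, without the dynamical regularization-parameter selection; all sizes and constants are illustrative):

```python
import numpy as np

def second_order_flow(A, b, eta=1.0, dt=0.05, n_steps=2000):
    """Integrate x'' + eta x' = -A^T (A x - b) with a semi-implicit damped
    scheme: the friction term is treated implicitly in the velocity update."""
    x = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(n_steps):
        grad = A.T @ (A @ x - b)
        v = ((1 - 0.5 * eta * dt) * v - dt * grad) / (1 + 0.5 * eta * dt)
        x = x + dt * v
    return x

# small consistent test problem: the flow relaxes to the least-squares solution
rng = np.random.default_rng(6)
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
b = A @ x_true
x = second_order_flow(A, b)
```

Compared with a first-order (steepest-descent) flow, the second-order dynamics can traverse flat regions faster while the damping term η x' dissipates energy so the trajectory settles at a minimizer.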

  9. Nonpolynomial Lagrangian approach to regular black holes

    NASA Astrophysics Data System (ADS)

    Colléaux, Aimeric; Chinaglia, Stefano; Zerbini, Sergio

    We present a review of Lagrangian models admitting spherically symmetric regular black holes (RBHs) and cosmological bounce solutions. Nonlinear electrodynamics, nonpolynomial gravity, and fluid approaches are explained in detail. They consist, respectively, of a gauge-invariant generalization of the Maxwell Lagrangian, of modifications of the Einstein-Hilbert action via nonpolynomial curvature invariants, and of the reconstruction of density profiles able to cure the central singularity of black holes. The nonpolynomial gravity curvature invariants have the special property of being second-order and polynomial in the metric field in spherically symmetric spacetimes. Along the way, other models and results are discussed, and some general properties that RBHs should satisfy are mentioned. A covariant Sakharov criterion for the absence of singularities in dynamical spherically symmetric spacetimes is also proposed and checked for some examples of such regular metric fields.

  10. Ab initio calculation of excess properties of La1−x(Ln,An)xPO4 solid solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yan; JARA High-Performance Computing, Schinkelstrasse 2, 52062 Aachen; Kowalski, Piotr M., E-mail: p.kowalski@fz-juelich.de

    2014-12-15

    We used an ab initio computational approach to predict the excess enthalpy of mixing and the corresponding regular/subregular model parameters for La1−xLnxPO4 (Ln = Ce, …, Tb) and La1−xAnxPO4 (An = Pu, Am and Cm) monazite-type solid solutions. We found that the regular model interaction parameter W computed for La1−xLnxPO4 solid solutions matches the few existing experimental data. Within the lanthanide series, W increases quadratically with the volume mismatch between the LaPO4 and LnPO4 endmembers (ΔV = V(LaPO4) − V(LnPO4)), so that W(kJ/mol) = 0.618(ΔV(cm³/mol))². We demonstrate that this relationship also fits the interaction parameters computed for La1−xAnxPO4 solid solutions. This shows that lanthanides can be used as surrogates for investigating the thermodynamic mixing properties of actinide-bearing solid solutions. - Highlights: • The excess enthalpies of mixing for monazite-type solid solutions are computed. • The excess enthalpies increase with the endmember volume mismatch. • The relationship derived for lanthanides is transferable to La1−xAnxPO4 systems.
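The reported fit is easy to restate in code. The sketch below evaluates W from the endmember volume mismatch and the corresponding regular-model excess enthalpy (the 0.618 coefficient is the fit quoted in the abstract; the 2 cm³/mol mismatch and the function names are hypothetical illustrations):

```python
def interaction_parameter(delta_v):
    """Regular-model interaction parameter from the endmember volume mismatch,
    using the quadratic fit quoted in the abstract:
    W [kJ/mol] = 0.618 * (dV [cm^3/mol])^2."""
    return 0.618 * delta_v ** 2

def excess_enthalpy(x, w_kj):
    """Regular-solution excess enthalpy H_ex = W * x * (1 - x), in kJ/mol."""
    return w_kj * x * (1 - x)

w = interaction_parameter(2.0)   # hypothetical 2 cm^3/mol mismatch
h = excess_enthalpy(0.5, w)      # excess enthalpy is maximal at x = 0.5
```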

  11. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
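The bias-variance decomposition of image MSE described here can be reproduced on a toy linear problem, with ridge (Tikhonov) reconstruction standing in for the full NIR diffusion solver (all sizes, noise levels, and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 20))     # toy forward operator
x_true = rng.normal(size=20)
y_clean = A @ x_true
sigma = 0.5                       # measurement noise level

def ridge(A, y, alpha):
    """Tikhonov-regularized reconstruction."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y)

def image_mse(alpha, n_rep=100):
    """Estimate squared bias and variance of the regularized solution from
    repeated reconstructions with freshly drawn measurement noise."""
    sols = np.array([ridge(A, y_clean + sigma * rng.normal(size=40), alpha)
                     for _ in range(n_rep)])
    bias2 = np.sum((sols.mean(axis=0) - x_true) ** 2)
    var = np.sum(sols.var(axis=0))
    return bias2, var

b_lo, v_lo = image_mse(0.01)    # light regularization: variance dominates
b_hi, v_hi = image_mse(100.0)   # heavy regularization: bias dominates
```

This mirrors the abstract's observation: bias error dominates at high regularization parameter values, while variance dominates as the solution approaches the unregularized optimum.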

  12. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates of spatial slip distributions and their uncertainties. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple-time-window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise in the data is addressed by estimating the data covariance matrix from data residuals: starting from an a priori covariance matrix, the matrix is iteratively updated from the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach.
The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour following the origin time.
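
    A minimal sketch of Tikhonov inversion with the discrepancy-principle grid search described above, using a random toy kernel in place of the W-phase Green's functions (all sizes, the slip model, and the noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(40, 20))               # toy kernel (stand-in for Green's functions)
m_true = np.zeros(20)
m_true[5:9] = 1.0                           # toy slip model
noise_std = 0.05
d = G @ m_true + rng.normal(0.0, noise_std, size=40)

# Discrepancy-principle target: residual norm should match the expected noise norm.
target = noise_std * np.sqrt(len(d))

def tikhonov(G, d, lam):
    """Damped least squares: solve (G^T G + lam^2 I) m = G^T d."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam ** 2 * np.eye(n), G.T @ d)

# Grid search from strong to weak damping; keep the largest regularization
# whose residual still meets the noise-level target.
best_lam = None
for lam in np.logspace(1, -4, 60):
    m = tikhonov(G, d, lam)
    if np.linalg.norm(G @ m - d) <= target:
        best_lam = lam
        break
```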

  13. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first, the discrepancy is regularized by adding additional terms; such terms may include Tikhonov regularization, distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques seek the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, to which a maximal allowed error term may be added. Although this field is well developed with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling, such as the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques used for finding sparse solutions and propose modifications to handle selected constraints such as nonnegativity and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of a single nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability, and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results demonstrate the surprisingly good performance of these techniques. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
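
    The nonnegative sparse recovery discussed above can be sketched with a projected proximal-gradient (ISTA-style) iteration; this is a generic compressive-sensing sketch, not the authors' algorithm, and the problem sizes are invented:

```python
import numpy as np

def nonneg_ista(A, b, lam=0.1, step=None, iters=500):
    """Sparse, nonnegative least squares:
        minimize 0.5*||Ax - b||^2 + lam*sum(x)  subject to x >= 0.
    For x >= 0 the l1 norm is just sum(x), so the proximal step reduces to
    a constant shift followed by projection onto the nonnegative orthant.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - step * (grad + lam), 0.0)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 60))                 # fewer observations than unknowns
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, 2.0, 0.7]         # a few release points
b = A @ x_true
x_hat = nonneg_ista(A, b, lam=0.01)
```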

  14. Skyrmions, Skyrme stars and black holes with Skyrme hair in five spacetime dimensions

    NASA Astrophysics Data System (ADS)

    Brihaye, Yves; Herdeiro, Carlos; Radu, Eugen; Tchrakian, D. H.

    2017-11-01

    We consider a class of generalizations of the Skyrme model to five spacetime dimensions (d = 5), which is defined in terms of an O(5) sigma model. A special ansatz for the Skyrme field allows angular momentum to be present and yields equations of motion with a radial dependence only. Using it, we obtain: (1) everywhere-regular solutions describing localised energy lumps (Skyrmions); (2) self-gravitating, asymptotically flat, everywhere non-singular solitonic solutions (Skyrme stars), upon minimally coupling the model to Einstein's gravity; (3) both static and spinning black holes with Skyrme hair, the latter with rotation in two orthogonal planes and both angular momenta of equal magnitude. In the absence of gravity we present an analytic solution that satisfies a BPS-type bound and explore numerically some of the non-BPS solutions. In the presence of gravity, we contrast the solutions of this model with those of a complex scalar field model, namely boson stars and black holes with synchronised hair. Remarkably, even though the two models present key differences, and in particular the Skyrme model allows static hairy black holes, when rotation is introduced the synchronisation condition becomes mandatory, providing further evidence for its generality in obtaining rotating hairy black holes.

  15. Regular black holes in f(T) Gravity through a nonlinear electrodynamics source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junior, Ednaldo L.B.; Rodrigues, Manuel E.; Houndjo, Mahouton J.S., E-mail: ednaldobarrosjr@gmail.com, E-mail: esialg@gmail.com, E-mail: sthoundjo@yahoo.fr

    2015-10-01

    We seek to obtain a new class of exact solutions of regular black holes in f(T) gravity with non-linear electrodynamics as the material content, with spherical symmetry in 4D. The equations of motion allow various solutions of General Relativity to be recovered as the particular case f(T) = T. We develop a powerful method for finding exact solutions and obtain the first new class of regular black hole solutions in f(T) theory, in which all geometric scalars vanish at the origin of the radial coordinate and are finite everywhere, as well as a new class of singular black holes.

  16. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) calculates the probability of a word sequence and thereby provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the many parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
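
    The Gaussian-prior regularization described above amounts to adding an L2 penalty to the cross-entropy objective. A minimal sketch for a plain softmax classifier (not an RNN; the shapes and the hyperparameter alpha are illustrative assumptions):

```python
import numpy as np

def map_loss(W, X, y, alpha):
    """Regularized cross-entropy: the negative log-posterior under a
    zero-mean Gaussian prior on the weights is the cross-entropy plus an
    L2 penalty; alpha plays the hyperparameter role the abstract tunes by
    maximizing the marginal likelihood."""
    logits = X @ W
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(y)), y].mean()         # cross-entropy term
    return nll + 0.5 * alpha * np.sum(W ** 2)             # Gaussian-prior term

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 8))        # 50 samples, 8 features
y = rng.integers(0, 3, size=50)     # 3 classes
W = rng.normal(size=(8, 3))
```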

  17. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.

  18. Managing a closed-loop supply chain inventory system with learning effects

    NASA Astrophysics Data System (ADS)

    Jauhari, Wakhid Ahmad; Dwicahyani, Anindya Rachma; Hendaryani, Oktiviandri; Kurdhi, Nughthoh Arfawi

    2018-02-01

    In this paper, we propose a closed-loop supply chain model consisting of a retailer and a manufacturer. We investigate the impact of learning in regular production, remanufacturing, and reworking. Customer demand is assumed deterministic and is satisfied from both regular production and the remanufacturing process. The return rate of used items depends on quality. We propose a mathematical model whose objective is to maximize the joint total profit by simultaneously determining the length of the ordering cycle for the retailer and the numbers of regular production and remanufacturing cycles. An algorithm is suggested for finding the optimal solution, and a numerical example illustrates the application of the proposed model. The results show that the integrated model performs better in reducing total cost than the independent model. The total cost is most affected by changes in the unit production cost and the acceptable quality level. In addition, changes in the defective-item proportion and the fraction of holding costs significantly influence the retailer's ordering period.

  19. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

    We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  20. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area-source pollutant strength is a relevant issue for the atmospheric environment. This constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network, a multi-layer perceptron, whose connection weights are computed by the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem whose objective function is the square difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
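
    A minimal global-best PSO of the kind cited in [2] can be sketched as follows; the quadratic toy misfit and all swarm parameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best particle swarm optimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # per-particle best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()             # swarm-best position
    w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Toy misfit: quadratic with known minimum at (1, -2), standing in for the
# square difference between measured and modeled concentrations.
source = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, dim=2)
```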

  1. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of Newton's iteration

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng

    2018-03-01

    A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.
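
    One generic way to smooth the non-smooth tensile/compressive split, in the spirit of the regularization methods mentioned above (the paper's three specific methods are not reproduced here), is to replace max(x, 0) by a smooth surrogate with a small parameter eps:

```python
import numpy as np

def pos_smooth(x, eps):
    """Smooth surrogate for the tensile part <x>_+ = max(x, 0):
    (x + sqrt(x^2 + eps^2)) / 2, infinitely differentiable for eps > 0
    and recovering max(x, 0) as eps -> 0."""
    return 0.5 * (x + np.sqrt(x * x + eps * eps))

def neg_smooth(x, eps):
    """Smooth compressive part, chosen so the split is exact: x = pos + neg."""
    return 0.5 * (x - np.sqrt(x * x + eps * eps))

# Applied eigenvalue-wise to the strain tensor, this keeps the energy split
# smooth, which is what restores convergence of Newton's iteration.
x = np.linspace(-2.0, 2.0, 401)
eps = 1e-3
```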

  2. Interplay between gravity and quintessence: a set of new GR solutions

    NASA Astrophysics Data System (ADS)

    Chernin, Arthur D.; Santiago, David I.; Silbergleit, Alexander S.

    2002-02-01

    A set of new exact analytical general relativity (GR) solutions with time-dependent and spatially inhomogeneous quintessence demonstrates (1) a static non-empty space-time with a horizon-type singular surface; (2) time-dependent, spatially homogeneous 'spheres' which are completely different in geometry from the Friedmann isotropic models; and (3) infinitely strong anti-gravity at a 'true' singularity where the density is infinitely large. It is also found that (4) the GR solutions allow for an extreme 'density-free' form of energy that can generate regular space-time geometries.

  3. An entropy regularization method applied to the identification of wave distribution function for an ELF hiss event

    NASA Astrophysics Data System (ADS)

    Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé

    2006-06-01

    An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The generalized cross-validation and L-curve criteria were then tentatively used to provide a fully data-driven method; however, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is that it returns the WDF exhibiting the largest entropy and avoids the use of a priori models, which sometimes seem to be more accurate but without any justification.

  4. A Piecewise Deterministic Markov Toy Model for Traffic/Maintenance and Associated Hamilton–Jacobi Integrodifferential Systems on Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr

    2016-10-15

    We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110-1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending to networks Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton-Jacobi integrodifferential system. This ensures that the value function satisfies Perron's preconization for the (unique) candidate viscosity solution.

  5. An irregular lattice method for elastic wave propagation

    NASA Astrophysics Data System (ADS)

    O'Brien, Gareth S.; Bean, Christopher J.

    2011-12-01

    Lattice methods are a class of numerical scheme which represent a medium as a connection of interacting nodes or particles. In the case of modelling seismic wave propagation, the interaction term is determined from Hooke's law including a bond-bending term. This approach has been shown to model isotropic seismic wave propagation in an elastic or viscoelastic medium by selecting the appropriate underlying lattice structure. To predetermine the material constants, this methodology has been restricted to regular grids, hexagonal or square in 2-D or cubic in 3-D. Here, we present a method for isotropic elastic wave propagation in which this lattice restriction is removed. The methodology is outlined and a relationship between the elastic material properties and an irregular lattice geometry is derived. The numerical method is compared with an analytical solution for wave propagation in an infinite homogeneous body, and with a numerical solution for a layered elastic medium. The dispersion properties of this method are derived from a plane wave analysis, showing that the scheme is more dispersive than a regular lattice method; the computational costs of using an irregular lattice are therefore higher. However, by removing the regular lattice structure, the anisotropic nature of fracture propagation in such methods can be removed.

  6. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE PAGES

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    2017-11-09

    Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.

  8. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

    The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper, the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data-refresh strategy. Second, a transfer function generated from the filtering results of the measured responses is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to improve the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to other cases of the FARF method and to traditional regularization methods for dynamic force reconstruction.
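
    The fractional-order accumulation operator underlying the FARF idea can be sketched as follows (a generic r-order accumulative generating operator with generalized binomial coefficients, not the paper's exact filter; for r = 1 it reduces to an ordinary cumulative sum):

```python
import numpy as np
from math import gamma

def frac_accumulate(x, r):
    """Fractional-order accumulation (r-AGO):
        y[k] = sum_{i<=k} Gamma(r + k - i) / (Gamma(k - i + 1) * Gamma(r)) * x[i],
    i.e. a weighted running sum with generalized binomial coefficients.
    For r = 1 every coefficient equals 1, giving the ordinary cumulative sum.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = np.zeros(n)
    for k in range(n):
        for i in range(k + 1):
            j = k - i
            y[k] += gamma(r + j) / (gamma(j + 1) * gamma(r)) * x[i]
    return y

y1 = frac_accumulate([1.0, 2.0, 3.0, 4.0], r=1.0)    # ordinary cumsum
y_frac = frac_accumulate([1.0, 2.0, 3.0, 4.0], r=0.1)  # gentler accumulation
```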

  9. Refraction tomography mapping of near-surface dipping layers using landstreamer data at East Canyon Dam, Utah

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.

    2008-01-01

    We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions, best matching the dipping-layer structure of nearby outcrops. A reasonably well-matched solution was obtained using an unusual set of optimal regularization parameters, whereas conventional regularization parameters did not provide results as realistic. Thus, we consider that even if only qualitative (i.e., visual) a priori information about a site is available, as in the case of East Canyon Dam, Utah, it might be possible to minimize refraction nonuniqueness by estimating the most appropriate regularization parameters.

  10. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.

  11. Deformations of the Almheiri-Polchinski model

    NASA Astrophysics Data System (ADS)

    Kyono, Hideki; Okumura, Suguru; Yoshida, Kentaroh

    2017-03-01

    We study deformations of the Almheiri-Polchinski (AP) model by employing the Yang-Baxter deformation technique. The general deformed AdS2 metric becomes a solution of a deformed AP model. In particular, the dilaton potential is deformed from a simple quadratic form to a hyperbolic function-type potential similarly to integrable deformations. A specific solution is a deformed black hole solution. Because the deformation makes the spacetime structure around the boundary change drastically and a new naked singularity appears, the holographic interpretation is far from trivial. The Hawking temperature is the same as the undeformed case but the Bekenstein-Hawking entropy is modified due to the deformation. This entropy can also be reproduced by evaluating the renormalized stress tensor with an appropriate counter-term on the regularized screen close to the singularity.

  12. Main features of nucleation in model solutions of oral cavity

    NASA Astrophysics Data System (ADS)

    Golovanova, O. A.; Chikanova, E. S.; Punin, Yu. O.

    2015-05-01

    The regularities of nucleation in model solutions of the oral cavity have been investigated, and the induction order and constants have been determined for two systems: saliva and dental plaque fluid (DPF). It is shown that an increase in the initial supersaturation leads to a transition from heterogeneous nucleation of crystallites to homogeneous nucleation. Some additives are found to enhance nucleation (HCO3- > C6H12O6 > F-), while others hinder this process (protein (casein) > Mg2+). It is established that crystallization in DPF occurs more rapidly and the DPF composition is favorable for the growth of small (52.6-26.1 μm) crystallites. On the contrary, the conditions implemented in the model saliva solution facilitate the formation of larger (198.4-41.8 μm) crystals.

  13. Enthalpy of Mixing in Al–Tb Liquid

    DOE PAGES

    Zhou, Shihuai; Tackes, Carl; Napolitano, Ralph

    2017-06-21

    The liquid-phase enthalpy of mixing for Al-Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. Mixing enthalpy is determined relative to the unmixed pure (Al and Tb) components. The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Finally, based on our measurements, three different semi-empirical solution models are offered for the excess free energy of the liquid, including regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of the mixing enthalpy.
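
    The regular and subregular models mentioned above are the first terms of a Redlich-Kister expansion of the excess enthalpy; a sketch with illustrative coefficients (not the fitted Al-Tb parameters):

```python
def excess_enthalpy_rk(x_b, L):
    """Redlich-Kister expansion for the excess (mixing) enthalpy of a
    binary A-B liquid:
        H_ex = xA * xB * sum_k L[k] * (xA - xB)**k.
    One term (k = 0) is the regular model; two terms give the subregular
    model, which allows an asymmetric mixing curve.
    """
    x_a = 1.0 - x_b
    diff = x_a - x_b
    return x_a * x_b * sum(Lk * diff ** k for k, Lk in enumerate(L))

h_regular = excess_enthalpy_rk(0.3, [-100.0])           # symmetric about x = 0.5
h_subregular = excess_enthalpy_rk(0.3, [-100.0, 25.0])  # skewed by the L1 term
```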

  14. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical measurements at the boundary. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms involving higher order differential operators were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular-grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: conductivity changes along selected left and right vertical lines are plotted for the ground truth (GT), the TV and TGV reconstructions, and reconstructions from the GREIT algorithm.

  15. Influence of viscous dissipation on a copper oxide nanofluid in an oblique channel: Implementation of the KKL model

    NASA Astrophysics Data System (ADS)

    Ahmed, Naveed; Adnan; Khan, Umar; Mohyud-Din, Syed Tauseef; Manzoor, Raheela

    2017-05-01

    This paper aims to study the flow of a nanofluid in the presence of viscous dissipation in an oblique channel (nonparallel plane walls). For thermal conductivity of the nanofluid, the KKL model is utilized. Water is taken as the base fluid and it is assumed to be containing the solid nanoparticles of copper oxide. The appropriate set of partial differential equations is transformed into a self-similar system with the help of feasible similarity transformations. The solution of the model is obtained analytically and to ensure the validity of analytical solutions, numerically one is also calculated. The homotopy analysis method (HAM) and the Runge-Kutta numerical method (coupled with shooting techniques) have been employed for the said purpose. The influence of the different flow parameters in the model on velocity, thermal field, skin friction coefficient and local rate of heat transfer has been discussed with the help of graphs. Furthermore, graphical comparison between the local rate of heat transfer in regular fluids and nanofluids has been made which shows that in case of nanofluids, heat transfer is rapid as compared to regular fluids.

  16. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The automated (objective) regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar or better reconstruction accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  17. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    NASA Astrophysics Data System (ADS)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

    In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space …

  18. Generalization of the binary structural phase field crystal model

    NASA Astrophysics Data System (ADS)

    Smith, Nathan; Provatas, Nikolas

    2017-10-01

    Two improvements to the binary structural phase field crystal (XPFC) theory are presented. The first is an improvement to the phenomenology for modelling density-density correlation functions and the second extends the free energy of the mixing term in the binary XPFC model beyond ideal mixing to a regular solution model. These improvements are applied to study kinetics of precipitation from solution. We observe a two-step nucleation pathway similar to recent experimental work [N. D. Loh, S. Sen, M. Bosman, S. F. Tan, J. Zhong, C. A. Nijhuis, P. Král, P. Matsudaira, and U. Mirsaidov, Nat. Chem. 9, 77 (2017), 10.1038/nchem.2618; A. F. Wallace, L. O. Hedges, A. Fernandez-Martinez, P. Raiteri, J. D. Gale, G. A. Waychunas, S. Whitelam, J. F. Banfield, and J. J. De Yoreo, Science 341, 885 (2013), 10.1126/science.1230915] in which the liquid solution first decomposes into solute-poor and solute-rich regions, followed by precipitate nucleation of the solute-rich regions. Additionally, we find a phenomenon not previously described in the literature in which the growth of precipitates is accelerated in the presence of uncrystallized solute-rich liquid regions.

  19. Mixture models with entropy regularization for community detection in networks

    NASA Astrophysics Data System (ADS)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying network structure contained in a network, simultaneously. In the model, by minimizing the entropy of mixing coefficients of NMM using EM (expectation-maximization) solution, the small clusters contained little information can be discarded step by step. The empirical study on both synthetic networks and real networks has shown that the proposed model EMM is superior to the state-of-the-art methods.
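    The pruning mechanism described above (shrinking away mixture components that carry little mass) can be sketched in a simplified setting. The snippet below is a minimal one-dimensional Gaussian-mixture analogue of entropy-regularized EM, not the paper's network model EMM; the function name, the penalty form `nk - gamma*n`, and all parameter values are illustrative assumptions, and NumPy is assumed to be available.

```python
import numpy as np

def em_entropy_gmm(x, k0=6, gamma=0.05, iters=200, seed=0):
    """EM for a 1-D Gaussian mixture with a penalty on the mixing weights
    that prunes weak components (illustrative analogue of EMM's idea)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k0, replace=False)     # initialize means at data points
    var = np.full(k0, np.var(x))
    pi = np.full(k0, 1.0 / k0)
    for _ in range(iters):
        # E-step: responsibilities r[i, k] of component k for point i
        p = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p * pi
        r /= r.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)
        # Penalized M-step: subtract a fixed mass from every component and
        # drop those that fall to zero (small clusters carry little information)
        w = np.maximum(nk - gamma * len(x), 0.0)
        keep = w > 0
        r, w = r[:, keep], w[keep]
        r /= r.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)
        pi = w / w.sum()
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var
```

    On two-cluster data started with six components, the penalty typically drives the superfluous components' weights to zero, so the number of components is inferred rather than fixed in advance.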

  20. An efficient method for model refinement in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem, which necessitates regularization. Bayesian methods are also suitable because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, whose model error must be refined by model-retrieving criteria, notably total least squares (TLS). The use of TLS, however, is limited to linear systems, which traditional Bayesian methods do not directly provide. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regulator. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes abnormalities well.
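    The Tikhonov regulator mentioned in the abstract has a simple closed form for a linearized problem: minimizing ||Ax - b||² + λ||x||² leads to the damped normal equations. A minimal sketch, assuming NumPy, with illustrative names and toy data:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2 via the damped normal
    equations: (A^T A + lam * I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# An ill-conditioned toy system: damping shrinks the unstable component.
A = np.array([[1.0, 0.0], [0.0, 1e-4]])
b = np.array([1.0, 1e-4])
x_plain = tikhonov_solve(A, b, 0.0)  # unregularized solution
x_reg = tikhonov_solve(A, b, 0.1)    # regularized, smaller norm
```

    Larger λ trades data fidelity for stability, the same bias-variance trade-off that MAP estimation and RTLS navigate.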

  1. Black hole solutions in d = 5 Chern-Simons gravity

    NASA Astrophysics Data System (ADS)

    Brihaye, Yves; Radu, Eugen

    2013-11-01

    The five dimensional Einstein-Gauss-Bonnet gravity with a negative cosmological constant becomes, for a special value of the Gauss-Bonnet coupling constant, a Chern-Simons (CS) theory of gravity. In this work we discuss the properties of several different types of black object solutions of this model. Special attention is paid to the case of spinning black holes with equal-magnitude angular momenta which possess a regular horizon of spherical topology. Closed form solutions are obtained in the small angular momentum limit. Nonperturbative solutions are constructed by solving numerically the equations of the model. Apart from that, new exact solutions describing static squashed black holes and black strings are also discussed. The action and global charges of all configurations studied in this work are obtained by using the quasilocal formalism with boundary counterterms generalized for the case of a d = 5 CS theory.

  2. Effects of spatially variable resolution on field-scale estimates of tracer concentration from electrical inversions using Archie's law

    USGS Publications Warehouse

    Singha, Kamini; Gorelick, Steven M.

    2006-01-01

    Two important mechanisms affect our ability to estimate solute concentrations quantitatively from the inversion of field-scale electrical resistivity tomography (ERT) data: (1) the spatially variable physical processes that govern the flow of current as well as the variation of physical properties in space and (2) the overparameterization of inverse models, which requires the imposition of a smoothing constraint (regularization) to facilitate convergence of the inverse solution. Based on analyses of field and synthetic data, we find that the ability of ERT to recover the 3D shape and magnitudes of a migrating conductive target is spatially variable. Additionally, the application of Archie's law to tomograms from field ERT data produced solute concentrations that are consistently less than 10% of point measurements collected in the field and estimated from transport modeling. Estimates of concentration from ERT using Archie's law only fit measured solute concentrations if the apparent formation factor is varied with space and time and allowed to take on unreasonably high values. Our analysis suggests that the inability to find a single petrophysical relation in space and time between concentration and electrical resistivity is largely an effect of two properties of ERT surveys: (1) decreased sensitivity of ERT to detect the target plume with increasing distance from the electrodes and (2) the smoothing imprint of regularization used in inversion.
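    Archie's law, as applied above, converts bulk electrical conductivity to pore-fluid conductivity through a formation factor, after which a calibration maps fluid conductivity to solute concentration. A hedged sketch of that two-step conversion in pure Python; the cementation exponent, background conductivity, and calibration slope below are illustrative values, not those of the study:

```python
def concentration_from_resistivity(rho_bulk, porosity, m=1.8,
                                   sigma_background=0.05, k_cal=1.0e-3):
    """Estimate solute concentration from bulk resistivity via Archie's law.

    Archie's law (no surface conduction): sigma_bulk = sigma_fluid * porosity**m,
    i.e. sigma_fluid = F * sigma_bulk with formation factor F = porosity**(-m).
    A linear calibration sigma_fluid = sigma_background + k_cal * C then
    gives the concentration C. All parameter values here are illustrative.
    """
    F = porosity ** (-m)                    # formation factor
    sigma_fluid = F * (1.0 / rho_bulk)      # bulk resistivity -> fluid conductivity
    return (sigma_fluid - sigma_background) / k_cal
```

    The abstract's point is precisely that no single (F, calibration) pair fits the field data in space and time, because ERT sensitivity decays away from the electrodes and regularization smooths the tomograms.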

  3. Existence and Regularity of Invariant Measures for the Three Dimensional Stochastic Primitive Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glatt-Holtz, Nathan, E-mail: negh@vt.edu; Kukavica, Igor, E-mail: kukavica@usc.edu; Ziane, Mohammed, E-mail: ziane@usc.edu

    2014-05-15

    We establish the continuity of the Markovian semigroup associated with strong solutions of the stochastic 3D Primitive Equations, and prove the existence of an invariant measure. The proof is based on new moment bounds for strong solutions. The invariant measure is supported on strong solutions and is furthermore shown to have higher regularity properties.

  4. Exact Markov chain and approximate diffusion solution for haploid genetic drift with one-way mutation.

    PubMed

    Hössjer, Ola; Tyvand, Peder A; Miloh, Touvia

    2016-02-01

    The classical Kimura solution of the diffusion equation is investigated for a haploid random mating (Wright-Fisher) model, with one-way mutations and initial-value specified by the founder population. The validity of the transient diffusion solution is checked by exact Markov chain computations, using a Jordan decomposition of the transition matrix. The conclusion is that the one-way diffusion model mostly works well, although the rate of convergence depends on the initial allele frequency and the mutation rate. The diffusion approximation is poor for mutation rates so low that the non-fixation boundary is regular. When this happens we perturb the diffusion solution around the non-fixation boundary and obtain a more accurate approximation that takes quasi-fixation of the mutant allele into account. The main application is to quantify how fast a specific genetic variant of the infinite alleles model is lost. We also discuss extensions of the quasi-fixation approach to other models with small mutation rates. Copyright © 2015 Elsevier Inc. All rights reserved.
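    The exact Markov chain referred to above can be written down directly for small populations: the next generation's allele count is binomial, with the success probability shifted by the one-way mutation rate. A minimal sketch, assuming NumPy; the parametrization p_i = i/N + (1 - i/N)u is one common convention and may differ from the paper's:

```python
import numpy as np
from math import comb

def wright_fisher_matrix(N, u):
    """Exact (N+1)x(N+1) transition matrix for a haploid Wright-Fisher
    population of N gene copies with one-way mutation a -> A at rate u.
    State i = current number of A alleles; the next-generation count of A
    is Binomial(N, p_i) with p_i = i/N + (1 - i/N) * u."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        p = i / N + (1 - i / N) * u
        for j in range(N + 1):
            P[i, j] = comb(N, j) * p ** j * (1 - p) ** (N - j)
    return P

P = wright_fisher_matrix(8, 0.01)
# Iterating the chain gives exact (non-diffusion) fixation probabilities.
P500 = np.linalg.matrix_power(P, 500)
```

    Because mutation is one-way, state N is absorbing and the mutant allele ultimately fixes; comparing such exact iterates against the diffusion solution is the kind of check the paper performs.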

  5. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models

    PubMed Central

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints, such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm, based on combining the Empirical Bayes and iterative coordinate descent procedures, to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994

  7. Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations

    NASA Astrophysics Data System (ADS)

    Phan, Tuoc

    2017-12-01

    This paper studies the Sobolev regularity for weak solutions of a class of singular quasi-linear parabolic problems of the form ut -div [ A (x , t , u , ∇u) ] =div [ F ] with homogeneous Dirichlet boundary conditions over bounded spatial domains. Our main focus is on the case that the vector coefficients A are discontinuous and singular in (x , t)-variables, and dependent on the solution u. Global and interior weighted W 1 , p (ΩT , ω)-regularity estimates are established for weak solutions of these equations, where ω is a weight function in some Muckenhoupt class of weights. The results obtained are even new for linear equations, and for ω = 1, because of the singularity of the coefficients in (x , t)-variables.

  8. A singularity free analytical solution of artificial satellite motion with drag

    NASA Technical Reports Server (NTRS)

    Mueller, A.

    1978-01-01

    An analytical satellite theory based on the regular, canonical Poincaré-Similar (PSφ) elements is described, along with an accurate density model which can be implemented in the drag theory. A computationally efficient manner in which to expand the equations of motion into a Fourier series is discussed.

  9. Recovering fine details from under-resolved electron tomography data using higher order total variation ℓ 1 regularization

    DOE PAGES

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...

    2017-01-03

    Over the last decade or so, reconstruction methods using ℓ 1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ 1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ 1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images, even those for which TV was designed, particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. In conclusion, we develop results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.
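    The difference between TV and HOTV can already be seen on 1-D signals: first differences penalize slopes, while second differences leave them free. A small illustration, assuming NumPy; `tv1` and `hotv2` are illustrative names for the first- and second-order seminorms:

```python
import numpy as np

def tv1(x):
    """First-order total variation: l1 norm of first differences."""
    return np.abs(np.diff(x)).sum()

def hotv2(x):
    """Second-order (HOTV-style) seminorm: l1 norm of second differences."""
    return np.abs(np.diff(x, n=2)).sum()

ramp = np.linspace(0.0, 1.0, 11)        # smooth linear ramp
step = np.array([0.0, 0.0, 1.0, 1.0])   # piecewise constant jump
# TV charges the ramp (hence staircasing); HOTV leaves it free while still
# penalizing curvature at jumps.
```

    This is why HOTV reconstructions keep smooth gradients that TV would flatten into staircases.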

  10. A novel scatter-matrix eigenvalues-based total variation (SMETV) regularization for medical image restoration

    NASA Astrophysics Data System (ADS)

    Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian

    2015-12-01

    Total variation (TV)-based regularization has proven to be a popular and effective model for image restoration because of its ability to preserve edges. However, because TV favors a piecewise constant solution, processing results in flat regions of the image easily exhibit "staircase effects", and the amplitude of edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot change with local spatial information of the image. In this paper, we propose a novel scatter-matrix eigenvalues-based TV (SMETV) regularization with a blind image restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce noise in flat regions as well as preserve edge and detail information. Moreover, it becomes more robust to changes of the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.

  11. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix computing the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  12. A gradient enhanced plasticity-damage microplane model for concrete

    NASA Astrophysics Data System (ADS)

    Zreid, Imadeddin; Kaliske, Michael

    2018-03-01

    Computational modeling of concrete poses two main types of challenges. The first is the mathematical description of local response for such a heterogeneous material under all stress states, and the second is the stability and efficiency of the numerical implementation in finite element codes. The paper at hand presents a comprehensive approach addressing both issues. Adopting the microplane theory, a combined plasticity-damage model is formulated and regularized by an implicit gradient enhancement. The plasticity part introduces a new microplane smooth 3-surface cap yield function, which provides a stable numerical solution within an implicit finite element algorithm. The damage part utilizes a split, which can describe the transition of loading between tension and compression. Regularization of the model by the implicit gradient approach eliminates the mesh sensitivity and numerical instabilities. Identification methods for model parameters are proposed and several numerical examples of plain and reinforced concrete are carried out for illustration.

  13. Well-Posedness Results for a Class of Toxicokinetic Models

    DTIC Science & Technology

    2001-07-24

    estimation. The main result that we establish here regarding well-posedness of solutions is based on ideas presented in [5] and [1]. Banks and Musante [5...necessary regularity required for the model to fit into the second class of abstract problems discussed by Banks and Musante. Transport models for other...upon the results of Banks and Musante by achieving well-posedness for a more general class of abstract nonlinear parabolic equations. Ackleh, Banks and

  14. A Hierarchical Visualization Analysis Model of Power Big Data

    NASA Astrophysics Data System (ADS)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the conception of integrating VR scenes and power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed targeting different abstract modules such as transaction, engine, computation, control and storage. The normally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  15. An analytical method for the inverse Cauchy problem of the Lamé equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lamé equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.

  16. Light scattering measurements supporting helical structures for chromatin in solution.

    PubMed

    Campbell, A M; Cotter, R I; Pardon, J F

    1978-05-01

    Laser light scattering measurements have been made on a series of polynucleosomes containing from 50 to 150 nucleosomes. Radii of gyration have been determined as a function of polynucleosome length for different ionic strength solutions. The results suggest that at low ionic strength the chromatin adopts a loosely helical structure rather than a random coil. The helix becomes more regular on increasing the ionic strength, the dimensions resembling those proposed by Finch and Klug for their solenoid model.

  17. General three-state model with biased population replacement: Analytical solution and application to language dynamics

    NASA Astrophysics Data System (ADS)

    Colaiori, Francesca; Castellano, Claudio; Cuskley, Christine F.; Loreto, Vittorio; Pugliese, Martina; Tria, Francesca

    2015-01-01

    Empirical evidence shows that the rate of irregular usage of English verbs exhibits discontinuity as a function of their frequency: the most frequent verbs tend to be totally irregular. We aim to qualitatively understand the origin of this feature by studying simple agent-based models of language dynamics, where each agent adopts an inflectional state for a verb and may change it upon interaction with other agents. At the same time, agents are replaced at some rate by new agents adopting the regular form. In models with only two inflectional states (regular and irregular), we observe that either all verbs regularize irrespective of their frequency, or a continuous transition occurs between a low-frequency state, where the lemma becomes fully regular, and a high-frequency one, where both forms coexist. Introducing a third (mixed) state, wherein agents may use either form, we find that a third, qualitatively different behavior may emerge, namely, a discontinuous transition in frequency. We introduce and solve analytically a very general class of three-state models that allows us to fully understand these behaviors in a unified framework. Realistic sets of interaction rules, including the well-known naming game (NG) model, result in a discontinuous transition, in agreement with recent empirical findings. We also point out that the distinction between speaker and hearer in the interaction has no effect on the collective behavior. The results for the general three-state model, although discussed in terms of language dynamics, are widely applicable.
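    The replacement-driven regularization in these models is easy to simulate. The sketch below is a deliberately stripped-down two-state version (neutral copying plus biased replacement by regular-form speakers), intended only to show the machinery; the interaction rules, rate values, and names are illustrative assumptions and are much simpler than the paper's general three-state class.

```python
import random

def simulate(n=200, freq=0.1, rho=0.05, steps=20000, seed=3):
    """Toy two-state model: each agent holds the regular (0) or irregular (1)
    form of a verb. Each step, with probability `freq` (usage frequency) a
    random hearer copies a random speaker; with probability `rho` a random
    agent is replaced by a new speaker using the regular form. Returns the
    final fraction of irregular users."""
    random.seed(seed)
    state = [1] * n  # start fully irregular
    for _ in range(steps):
        if random.random() < freq:
            s, h = random.randrange(n), random.randrange(n)
            state[h] = state[s]  # neutral copying (drift)
        if random.random() < rho:
            state[random.randrange(n)] = 0  # biased replacement
    return sum(state) / n
```

    With neutral copying, replacement is the only systematic force, so the population regularizes; the frequency-dependent coexistence and discontinuous transitions of the paper require the richer interaction rules of its three-state models.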

  18. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. Results on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
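    The generalized p-shrinkage mapping mentioned in the abstract has a widely used closed form (due to Chartrand) which the sketch below follows; the paper's exact mapping may differ in detail. For p = 1 it reduces to ordinary soft thresholding (assuming NumPy):

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage: sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
    With p = 1 this is the classical soft-thresholding operator of l1 models;
    with p < 1 large entries are shrunk less, promoting stronger sparsity."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore"):
        thresh = lam ** (2.0 - p) * np.abs(x) ** (p - 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
```

    Because the threshold shrinks as |x| grows when p < 1, strong edges and large coefficients survive shrinkage nearly intact, which is the sparsity advantage TGpV seeks over the l1-based TGV.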

  19. Partial regularity of viscosity solutions for a class of Kolmogorov equations arising from mathematical finance

    NASA Astrophysics Data System (ADS)

    Rosestolato, M.; Święch, A.

    2017-02-01

    We study value functions which are viscosity solutions of certain Kolmogorov equations. Using PDE techniques we prove that they are C^{1+α} regular on special finite dimensional subspaces. The problem has origins in hedging derivatives of risky assets in mathematical finance.

  20. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.

  1. Cosmological space-times with resolved Big Bang in Yang-Mills matrix models

    NASA Astrophysics Data System (ADS)

    Steinacker, Harold C.

    2018-02-01

    We present simple solutions of IKKT-type matrix models that can be viewed as quantized homogeneous and isotropic cosmological space-times, with finite density of microstates and a regular Big Bang (BB). The BB arises from a signature change of the effective metric on a fuzzy brane embedded in Lorentzian target space, in the presence of a quantized 4-volume form. The Hubble parameter is singular at the BB, and becomes small at late times. There is no singularity from the target space point of view, and the brane is Euclidean "before" the BB. Both recollapsing and expanding universe solutions are obtained, depending on the mass parameters.

  2. Spectral partitioning in equitable graphs.

    PubMed

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
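
The Kesten-McKay law mentioned above has a simple closed form; a quick numerical check (assuming degree d = 3 and a plain midpoint rule) confirms it is a proper spectral density:

```python
import math

def kesten_mckay(lam, d):
    """Kesten-McKay spectral density for random d-regular graphs:
    rho(lam) = d*sqrt(4*(d-1) - lam^2) / (2*pi*(d^2 - lam^2)),
    supported on |lam| <= 2*sqrt(d-1)."""
    edge = 2.0 * math.sqrt(d - 1)
    if abs(lam) >= edge:
        return 0.0
    return d * math.sqrt(4 * (d - 1) - lam * lam) / (2 * math.pi * (d * d - lam * lam))

# midpoint-rule check that the density integrates to ~1 for d = 3
d = 3
edge = 2.0 * math.sqrt(d - 1)
n = 20000
h = 2.0 * edge / n
total = sum(kesten_mckay(-edge + (i + 0.5) * h, d) for i in range(n)) * h
```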

  3. Spectral partitioning in equitable graphs

    NASA Astrophysics Data System (ADS)

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.

  4. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to effectively handle difficult challenges, such as occlusion or image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for the combined L0 and L1 regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results both in accuracy and speed. PMID:27992474
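
As a sketch of the proximal-gradient skeleton behind such a solver, the following runs plain (non-accelerated) ISTA with only the L1 prox on a toy least-squares problem; the paper's combined L0+L1 prox and APG acceleration are omitted:

```python
import numpy as np

def soft(x, t):
    # soft-thresholding: proximal operator of t*||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, steps=500):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1.

    A minimal stand-in for an accelerated L0+L1 solver: same skeleton,
    but with the simple L1 prox only.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = soft(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[1], x_true[4] = 2.0, -3.0           # a 2-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)                # recovers the sparse coefficients
```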

  5. 3D Gravity Inversion using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Toushmalani, Reza; Saibi, Hakim

    2015-08-01

    Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for a gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined the 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from the Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of salt dome in this region.
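
The regularized solve at the core of such an inversion can be illustrated generically; this is a toy zeroth-order Tikhonov example on a synthetic ill-conditioned system, not the authors' gravity code:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized least squares: min ||Ax - b||^2 + alpha*||x||^2,
    solved via the normal equations (A^T A + alpha*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# synthetic ill-conditioned system: regularization stabilizes the solution
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((20, 20)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
s = np.logspace(0, -8, 8)                  # rapidly decaying singular values
A = U[:, :8] @ np.diag(s) @ V.T
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-6 * rng.standard_normal(20)   # slightly noisy data
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]    # noise blows up
x_reg = tikhonov(A, b, alpha=1e-6)                # damped, stable solution
```

The naive solution amplifies the tiny data noise by the inverse of the smallest singular values, while the regularized solve trades a small bias for a dramatically smaller error.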

  6. Global Regularity for the Fractional Euler Alignment System

    NASA Astrophysics Data System (ADS)

    Do, Tam; Kiselev, Alexander; Ryzhik, Lenya; Tan, Changhui

    2018-04-01

    We study a pressureless Euler system with a non-linear density-dependent alignment term, originating in the Cucker-Smale swarming models. The alignment term is dissipative in the sense that it tends to equilibrate the velocities. Its density dependence is natural: the alignment rate increases in the areas of high density due to species discomfort. The diffusive term has the order of a fractional Laplacian (-∂_xx)^(α/2), α ∈ (0, 1). The corresponding Burgers equation with a linear dissipation of this type develops shocks in a finite time. We show that the alignment nonlinearity enhances the dissipation, and the solutions are globally regular for all α ∈ (0, 1). To the best of our knowledge, this is the first example of such regularization due to the non-local nonlinear modulation of dissipation.

  7. Racial Differences in Trajectories of Heavy Drinking and Regular Marijuana Use from Ages 13 through 24 Among African-American and White Males

    PubMed Central

    Finlay, Andrea K.; White, Helene R.; Mun, Eun-Young; Cronley, Courtney C.; Lee, Chioun

    2011-01-01

    Background Although there are significant differences in prevalence of substance use between African-American and White adolescents, few studies have examined racial differences in developmental patterns of substance use, especially during the important developmental transition from adolescence to young adulthood. This study examines racial differences in trajectories of heavy drinking and regular marijuana use from adolescence into young adulthood. Methods A community-based sample of non-Hispanic African-American (n = 276) and non-Hispanic White (n = 211) males was analyzed to identify trajectories from ages 13 through 24. Results Initial analyses indicated race differences in heavy drinking and regular marijuana use trajectories. African Americans were more likely than Whites to be members of the nonheavy drinkers/nondrinkers group and less likely to be members of the early-onset heavy drinkers group. The former were also more likely than the latter to be members of the late-onset regular marijuana use group. Separate analyses by race indicated differences in heavy drinking for African Americans and Whites. A 2-group model for heavy drinking fit best for African Americans, whereas a 4-group solution fit best for Whites. For regular marijuana use, a similar 4-group solution fit for both races, although group proportions differed. Conclusions Within-race analyses indicated that there were clear race differences in the long-term patterns of alcohol use; regular marijuana use patterns were more similar. Extended follow-ups are needed to examine differences and similarities in maturation processes for African-American and White males. For both races, prevention and intervention efforts are necessary into young adulthood. PMID:21908109

  8. Applications of exact traveling wave solutions of Modified Liouville and the Symmetric Regularized Long Wave equations via two new techniques

    NASA Astrophysics Data System (ADS)

    Lu, Dianchen; Seadawy, Aly R.; Ali, Asghar

    2018-06-01

    In this work, we employ two novel methods, the extended simple equation and exp(-Ψ(ξ))-expansion methods, to find the exact travelling wave solutions of the Modified Liouville equation and the Symmetric Regularized Long Wave equation. By assigning different values to the parameters, different types of solitary wave solutions are derived from the exact travelling wave solutions, which demonstrates the efficiency and precision of our methods. Some solutions are represented graphically. The obtained results have several applications in physical science.

  9. The method of A-harmonic approximation and optimal interior partial regularity for nonlinear elliptic systems under the controllable growth condition

    NASA Astrophysics Data System (ADS)

    Chen, Shuhong; Tan, Zhong

    2007-11-01

    In this paper, we consider nonlinear elliptic systems under the controllable growth condition. We use a new method introduced by Duzaar and Grotowski for proving partial regularity of weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results under the natural growth condition to the case of the controllable growth condition, and directly establish the optimal Hölder exponent for the derivative of a weak solution.

  10. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
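
For a convolution-type equation of the first kind, the zeroth-order Tikhonov solution has a closed form in the Fourier domain. The sketch below uses a single constant regularization parameter (unlike the paper's two-parameter stabilizing functions) and assumes circular convolution:

```python
import numpy as np

def tikhonov_deconvolve(g, k, alpha):
    """Deconvolve g = k (*) f (circular convolution) via Tikhonov + FFT:
    F = conj(K) * G / (|K|^2 + alpha), the standard zeroth-order filter."""
    K, G = np.fft.fft(k), np.fft.fft(g)
    F = np.conj(K) * G / (np.abs(K) ** 2 + alpha)
    return np.real(np.fft.ifft(F))

n = 256
x = np.arange(n)
f = np.exp(-0.5 * ((x - 100) / 8.0) ** 2)        # true signal
k = np.exp(-0.5 * ((x - n // 2) / 5.0) ** 2)     # Gaussian blur kernel
k /= k.sum()
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(k)))  # blurred data
rng = np.random.default_rng(2)
g_noisy = g + 1e-4 * rng.standard_normal(n)      # small measurement noise
f_rec = tikhonov_deconvolve(g_noisy, k, alpha=1e-4)
```

Without the `alpha` term the division by near-zero kernel frequencies amplifies the noise without bound; the regularization caps that amplification at the cost of slightly smoothing the recovered signal.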

  11. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  12. The effect of solute on the homogeneous crystal nucleation frequency in metallic melts

    NASA Technical Reports Server (NTRS)

    Thompson, C. V.; Spaepen, F.

    1982-01-01

    A complete calculation that extends the classical theory for crystal nucleation in pure melts to binary alloys has been made. Using a regular solution model, approximate expressions have been developed for the free energy change upon crystallization as a function of solute concentration. They are used, together with model-based estimates of the interfacial tension, to calculate the nucleation frequency. The predictions of the theory for the maximum attainable undercooling are compared with existing experimental results for non-glass forming alloys. The theory is also applied to several easy glass-forming alloys (Pd-Si, Au-Si, Fe-B) for qualitative comparison with the present experimental experience on the ease of glass formation, and for assessment of the potential for formation of the glass in bulk.
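
The regular solution free energy underlying such calculations has a standard textbook form; a minimal sketch, with an illustrative interaction parameter rather than the paper's alloy-specific values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def delta_g_mix(x, omega, T):
    """Molar Gibbs free energy of mixing in the regular solution model:

        dG_mix = omega*x*(1-x) + R*T*(x*ln(x) + (1-x)*ln(1-x))

    x is the mole fraction of solute, omega the interaction parameter
    (J/mol). A generic textbook form, not the paper's parametrization.
    """
    return (omega * x * (1 - x)
            + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x)))

# ideal solution (omega = 0): mixing always lowers G;
# strongly repulsive interactions: G_mix > 0 at mid compositions,
# which signals a miscibility gap
ideal = delta_g_mix(0.5, 0.0, 1000.0)        # negative
repulsive = delta_g_mix(0.5, 30000.0, 1000.0)  # positive
```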

  13. Lagrangian averaging, nonlinear waves, and shock regularization

    NASA Astrophysics Data System (ADS)

    Bhat, Harish S.

    In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-α) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories η^ε centered around a given Lagrangian flow η. With this tube framework, the LAE-α equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations ξ, truncate, average, and then model those terms that are nonlinear functions of ξ. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments that show that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here α is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-α limit, without any standard viscosity, solutions of the PDE converge strongly to weak solutions of the inviscid Burgers equation. We provide numerical evidence that this limit satisfies an entropy inequality for the inviscid Burgers equation. We demonstrate a Hamiltonian structure for the PDE.

  14. Evasion of No-Hair Theorems and Novel Black-Hole Solutions in Gauss-Bonnet Theories

    NASA Astrophysics Data System (ADS)

    Antoniou, G.; Bakopoulos, A.; Kanti, P.

    2018-03-01

    We consider a general Einstein-scalar-Gauss-Bonnet theory with a coupling function f(ϕ). We demonstrate that black-hole solutions appear as a generic feature of this theory since a regular horizon and an asymptotically flat solution may be easily constructed under mild assumptions for f(ϕ). We show that the existing no-hair theorems are easily evaded, and a large number of regular black-hole solutions with scalar hair are then presented for a plethora of coupling functions f(ϕ).

  15. Evasion of No-Hair Theorems and Novel Black-Hole Solutions in Gauss-Bonnet Theories.

    PubMed

    Antoniou, G; Bakopoulos, A; Kanti, P

    2018-03-30

    We consider a general Einstein-scalar-Gauss-Bonnet theory with a coupling function f(ϕ). We demonstrate that black-hole solutions appear as a generic feature of this theory since a regular horizon and an asymptotically flat solution may be easily constructed under mild assumptions for f(ϕ). We show that the existing no-hair theorems are easily evaded, and a large number of regular black-hole solutions with scalar hair are then presented for a plethora of coupling functions f(ϕ).

  16. Predictive sparse modeling of fMRI data for improved classification, regression, and visualization using the k-support norm.

    PubMed

    Belilovsky, Eugene; Gkirtzou, Katerina; Misyrlis, Michail; Konova, Anna B; Honorio, Jean; Alia-Klein, Nelly; Goldstein, Rita Z; Samaras, Dimitris; Blaschko, Matthew B

    2015-12-01

    We explore various sparse regularization techniques for analyzing fMRI data, such as the ℓ1 norm (often called LASSO in the context of a squared loss function), elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. In this work we consider sparse regularization in both the regression and classification settings. We perform experiments on fMRI scans from cocaine-addicted as well as healthy control subjects. We show that in many cases, use of the k-support norm leads to better predictive performance, solution stability, and interpretability as compared to other standard approaches. We additionally analyze the advantages of using the absolute loss function versus the standard squared loss which leads to significantly better predictive performance for the regularization methods tested in almost all cases. Our results support the use of the k-support norm for fMRI analysis and on the clinical side, the generalizability of the I-RISA model of cocaine addiction. Copyright © 2015 Elsevier Ltd. All rights reserved.
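
The robustness advantage of the absolute loss reported above can be illustrated with a one-parameter toy: the squared loss is minimized by the mean, the absolute loss by the median, and only the former is dragged by an outlier (illustrative numbers, not the study's data):

```python
import statistics

# fitting a constant predictor to noisy targets with one corrupted sample
clean = [1.0, 1.1, 0.9, 1.05, 0.95]
corrupted = clean + [50.0]                 # a single outlier observation

mean_fit = statistics.mean(corrupted)      # squared-loss minimizer
median_fit = statistics.median(corrupted)  # absolute-loss minimizer
```

The mean is pulled far from the bulk of the data by the outlier, while the median barely moves — a toy version of the robustness that makes the absolute loss attractive for noisy fMRI targets.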

  17. Recent advancements in GRACE mascon regularization and uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Loomis, B. D.; Luthcke, S. B.

    2017-12-01

    The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist between the different centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.

  18. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches, and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could, for example, be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely but irregularly sampled synthetic seismic images.
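
The patch-coding step is typically a sparse pursuit. Below is a minimal orthogonal matching pursuit sketch, using a toy orthonormal dictionary so the recovery is exact by construction (the paper instead learns a redundant dictionary from the data):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms
    (columns of D) and refit the coefficients by least squares. A generic
    sketch of the sparse patch-coding step, not the paper's solver."""
    residual = y.copy()
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        chosen.append(j)
        coef, *_ = np.linalg.lstsq(D[:, chosen], y, rcond=None)
        residual = y - D[:, chosen] @ coef           # refit and update residual
    x = np.zeros(D.shape[1])
    x[chosen] = coef
    return x

# toy dictionary with orthonormal atoms, so the 2-sparse code is recovered exactly
rng = np.random.default_rng(3)
D, _ = np.linalg.qr(rng.standard_normal((64, 32)))   # 32 orthonormal atoms
y = 1.5 * D[:, 3] - 2.0 * D[:, 7]                    # a 2-sparse "patch"
x = omp(D, y, k=2)
```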

  19. A network model of successive partitioning-limited solute diffusion through the stratum corneum.

    PubMed

    Schumm, Phillip; Scoglio, Caterina M; van der Merwe, Deon

    2010-02-07

    As the most exposed point of contact with the external environment, the skin is an important barrier to many chemical exposures, including medications, potentially toxic chemicals and cosmetics. Traditional dermal absorption models treat the stratum corneum lipids as a homogenous medium through which solutes diffuse according to Fick's first law of diffusion. This approach does not explain non-linear absorption and irregular distribution patterns within the stratum corneum lipids as observed in experimental data. A network model, based on successive partitioning-limited solute diffusion through the stratum corneum, where the lipid structure is represented by a large, sparse, and regular network where nodes have variable characteristics, offers an alternative, efficient, and flexible approach to dermal absorption modeling that simulates non-linear absorption data patterns. Four model versions are presented: two linear models, which have unlimited node capacities, and two non-linear models, which have limited node capacities. The non-linear model outputs produce absorption to dose relationships that can be best characterized quantitatively by using power equations, similar to the equations used to describe non-linear experimental data.
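
The limited-node-capacity mechanism can be sketched as a toy 1D chain: transfer between nodes is clipped by the receiver's free capacity, so absorption saturates at high dose. All parameters here are illustrative, not fitted values from the paper:

```python
def absorbed_fraction(dose, n_nodes=10, capacity=1.0, rate=0.2, steps=500):
    """Fraction of an applied dose that crosses a 1D chain of nodes whose
    uptake is clipped by each node's free capacity (a toy version of the
    non-linear, limited-node-capacity model variants)."""
    reservoir = dose                  # solute applied at the surface
    amount = [0.0] * n_nodes          # solute held by each lipid node
    sink = 0.0                        # solute absorbed systemically
    for _ in range(steps):
        # deepest node releases into the sink
        out = rate * amount[-1]
        amount[-1] -= out
        sink += out
        # node-to-node transfer, clipped by the receiver's free capacity
        for i in range(n_nodes - 2, -1, -1):
            move = min(rate * amount[i], capacity - amount[i + 1])
            amount[i] -= move
            amount[i + 1] += move
        # uptake from the surface reservoir into the first node
        move = min(rate * reservoir, capacity - amount[0])
        reservoir -= move
        amount[0] += move
    return sink / dose

low_dose = absorbed_fraction(0.5)     # far below saturation: nearly all absorbed
high_dose = absorbed_fraction(500.0)  # pathway saturates: fraction drops
```

The absorbed fraction is nearly dose-independent at low doses (the linear regime) but falls once the node capacities saturate, reproducing the non-linear absorption-to-dose relationship described above.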

  20. First Principle and Experimental Study for Site Preferences of Formability Improved Alloying Elements in Mg Crystal

    NASA Astrophysics Data System (ADS)

    Zeng, Ying; Jiang, Bin; Shi, Ouling; Quan, Gaofen; Al-Ezzi, Salih; Pan, FuSheng

    2018-07-01

    Some alloying elements (Al, Er, Gd, Li, Mn, Sn, Y, Zn) were recently shown by calculations or experiments to improve the formability of Mg alloys, but those studies ignored the elements' site preferences in the Mg crystal. A crystallographic model was built via first-principles calculations to predict the site preferences of these elements, and regularities between doping elements and site preferences were summarized. On the basis of the crystallographic model, a series of formulas was deduced by combining the diffraction law. The model predicted that a crystal plane of a Mg-based solid solution with anomalous XRD peak intensity, compared to that of pure Mg, preferentially hosts solute atoms. Three single-phase solid-solution alloys were then prepared through an original In-situ Solution Treatment, and their XRD patterns were compared. The experiments further described the site preferences of these solute atoms in the Mg crystal, verifying the calculated results.

  1. First Principle and Experimental Study for Site Preferences of Formability Improved Alloying Elements in Mg Crystal

    NASA Astrophysics Data System (ADS)

    Zeng, Ying; Jiang, Bin; Shi, Ouling; Quan, Gaofen; Al-Ezzi, Salih; Pan, FuSheng

    2018-03-01

    Some alloying elements (Al, Er, Gd, Li, Mn, Sn, Y, Zn) were recently shown by calculations or experiments to improve the formability of Mg alloys, but those studies ignored the elements' site preferences in the Mg crystal. A crystallographic model was built via first-principles calculations to predict the site preferences of these elements, and regularities between doping elements and site preferences were summarized. On the basis of the crystallographic model, a series of formulas was deduced by combining the diffraction law. The model predicted that a crystal plane of a Mg-based solid solution with anomalous XRD peak intensity, compared to that of pure Mg, preferentially hosts solute atoms. Three single-phase solid-solution alloys were then prepared through an original In-situ Solution Treatment, and their XRD patterns were compared. The experiments further described the site preferences of these solute atoms in the Mg crystal, verifying the calculated results.

  2. Extended Hansen solubility approach: naphthalene in individual solvents.

    PubMed

    Martin, A; Wu, P L; Adjei, A; Beerbower, A; Prausnitz, J M

    1981-11-01

    A multiple regression method using Hansen partial solubility parameters, δD, δP, and δH, was used to reproduce the solubilities of naphthalene in pure polar and nonpolar solvents and to predict its solubility in untested solvents. The method, called the extended Hansen approach, was compared with the extended Hildebrand solubility approach and the universal-functional-group-activity-coefficient (UNIFAC) method. The Hildebrand regular solution theory was also used to calculate naphthalene solubility. Naphthalene, an aromatic molecule having no side chains or functional groups, is "well-behaved," i.e., its solubility in active solvents known to interact with drug molecules is fairly regular. Because of its simplicity, naphthalene is a suitable solute with which to initiate the difficult study of solubility phenomena. The three methods tested (Hildebrand regular solution theory was introduced only for comparison of solubilities in regular solution) yielded similar results, reproducing naphthalene solubilities within approximately 30% of literature values. In some cases, however, the error was considerably greater. The UNIFAC calculation is superior in that it requires only the solute's heat of fusion, the melting point, and a knowledge of chemical structures of solute and solvent. The extended Hansen and extended Hildebrand methods need experimental solubility data on which to carry out regression analysis. The extended Hansen approach was the method of second choice because of its adaptability to solutes and solvents from various classes. Sample calculations are included to illustrate methods of predicting solubilities in untested solvents at various temperatures. The UNIFAC method was successful in this regard.

  3. Regularized GRACE monthly solutions by constraining the difference between the longitudinal and latitudinal gravity variations

    NASA Astrophysics Data System (ADS)

    Chen, Qiujie; Chen, Wu; Shen, Yunzhong; Zhang, Xingfu; Hsu, Houze

    2016-04-01

    The existing unconstrained Gravity Recovery and Climate Experiment (GRACE) monthly solutions i.e. CSR RL05 from Center for Space Research (CSR), GFZ RL05a from GeoForschungsZentrum (GFZ), JPL RL05 from Jet Propulsion Laboratory (JPL), DMT-1 from Delft Institute of Earth Observation and Space Systems (DEOS), AIUB from Bern University, and Tongji-GRACE01 as well as Tongji-GRACE02 from Tongji University, are dominated by correlated noise (such as north-south stripe errors) in high degree coefficients. To suppress the correlated noise of the unconstrained GRACE solutions, one typical option is to use post-processing filters such as decorrelation filtering and Gaussian smoothing , which are quite effective to reduce the noise and convenient to be implemented. Unlike these post-processing methods, the CNES/GRGS monthly GRACE solutions from Centre National d'Etudes Spatiales (CNES) were developed by using regularization with Kaula rule, whose correlated noise are reduced to such a great extent that no decorrelation filtering is required. Actually, the previous studies demonstrated that the north-south stripes in the GRACE solutions are due to the poor sensitivity of gravity variation in east-west direction. In other words, the longitudinal sampling of GRACE mission is very sparse but the latitudinal sampling of GRACE mission is quite dense, indicating that the recoverability of the longitudinal gravity variation is poor or unstable, leading to the ill-conditioned monthly GRACE solutions. To stabilize the monthly solutions, we constructed the regularization matrices by minimizing the difference between the longitudinal and latitudinal gravity variations and applied them to derive a time series of regularized GRACE monthly solutions named RegTongji RL01 for the period Jan. 2003 to Aug. 2011 in this paper. 
    The signal powers and noise level of RegTongji RL01 were analyzed, showing that: (1) no smoothing or decorrelation filtering is required for RegTongji RL01; and (2) the signal powers of RegTongji RL01 are obviously stronger than those of the filtered solutions while the noise levels of the regularized and filtered solutions are consistent, suggesting that RegTongji RL01 has a higher signal-to-noise ratio.
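
    The constrained inversion described above can be sketched as Tikhonov-style least squares with a custom regularization matrix. A minimal sketch, in which the matrices `A` and `L` and the weight `alpha` are illustrative stand-ins, not the actual GRACE normal equations:

```python
import numpy as np

# Regularized least squares: minimize ||A x - b||^2 + alpha * ||L x||^2,
# where L penalizes the difference between two parameter groups
# (loosely mimicking "longitudinal minus latitudinal" variations).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))        # toy design matrix
x_true = np.zeros(10)
x_true[:3] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(20)

L = np.hstack([np.eye(5), -np.eye(5)])   # 5 x 10 difference operator
alpha = 0.1                              # regularization weight

x_reg = np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ b)
residual = np.linalg.norm(A @ x_reg - b)
```

    The added term `alpha * L.T @ L` keeps the solve well conditioned even when `A.T @ A` alone is nearly singular, which is the role such a constraint plays in stabilizing ill-conditioned monthly solutions.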

  4. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  5. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption may be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models may be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  6. Seismic waves in a self-gravitating planet

    NASA Astrophysics Data System (ADS)

    Brazda, Katharina; de Hoop, Maarten V.; Hörmann, Günther

    2013-04-01

    The elastic-gravitational equations describe the propagation of seismic waves including the effect of self-gravitation. We rigorously derive and analyze this system of partial differential equations and boundary conditions for a general, uniformly rotating, elastic, but aspherical, inhomogeneous, and anisotropic, fluid-solid earth model, under minimal assumptions concerning the smoothness of material parameters and geometry. For this purpose we first establish a consistent mathematical formulation of the low regularity planetary model within the framework of nonlinear continuum mechanics. Using calculus of variations in a Sobolev space setting, we then show how the weak form of the linearized elastic-gravitational equations directly arises from Hamilton's principle of stationary action. Finally we prove existence and uniqueness of weak solutions by the method of energy estimates and discuss additional regularity properties.

  7. A fully Galerkin method for the recovery of stiffness and damping parameters in Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.

    1991-01-01

    A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
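
    The L-curve criterion mentioned above can be illustrated with a small Tikhonov sketch; the matrix sizes and noise level below are arbitrary choices, not taken from the paper:

```python
import numpy as np

# Tikhonov regularization and the trade-off behind the L-curve:
# residual norm ||A x - b|| vs. solution norm ||x|| across alphas.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15))
x_true = rng.standard_normal(15)
b = A @ x_true + 0.05 * rng.standard_normal(30)

alphas = np.logspace(-6, 2, 25)
res_norms, sol_norms = [], []
for a in alphas:
    x = np.linalg.solve(A.T @ A + a * np.eye(15), A.T @ b)
    res_norms.append(np.linalg.norm(A @ x - b))
    sol_norms.append(np.linalg.norm(x))
# Plotting log(res_norms) against log(sol_norms) traces the L-curve;
# the corner of the "L" is the usual choice of alpha.
```

    As `alpha` grows the residual norm can only increase and the solution norm can only shrink, which is exactly the trade-off the L-curve corner balances.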

  8. Characterization of Window Functions for Regularization of Electrical Capacitance Tomography Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Peng, Lihui; Xiao, Deyun

    2007-06-01

    This paper presents a regularization method that uses different window functions as regularizers for electrical capacitance tomography (ECT) image reconstruction. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. Window functions, such as the Hanning window and the cosine window, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours are clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularizers.
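
    A minimal sketch of the idea, assuming Hann-window filter factors applied to the SVD components; the square toy system below is not an ECT sensitivity matrix, and the paper's exact windowing scheme may differ:

```python
import numpy as np

# Window-function spectral filtering of an SVD solution: filter factors
# from a Hann window suppress the noise-dominated small singular values.
rng = np.random.default_rng(2)
A = rng.standard_normal((25, 25))   # stand-in for a sensitivity matrix
b = rng.standard_normal(25)         # stand-in for capacitance measurements

U, s, Vt = np.linalg.svd(A)
n = len(s)
f = 0.5 * (1.0 + np.cos(np.pi * np.arange(n) / (n - 1)))  # Hann filter factors
x_filt = Vt.T @ ((f / s) * (U.T @ b))  # filtered sum of f_i * (u_i.b / s_i) * v_i
```

    The factors run from 1 for the largest singular value down to 0 for the smallest, so the components most amplified by division by a small `s_i` are damped out.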

  9. Simple picture for neutrino flavor transformation in supernovae

    NASA Astrophysics Data System (ADS)

    Duan, Huaiyu; Fuller, George M.; Qian, Yong-Zhong

    2007-10-01

    We can understand many recently discovered features of flavor evolution in dense, self-coupled supernova neutrino and antineutrino systems with a simple, physical scheme consisting of two quasistatic solutions. One solution closely resembles the conventional, adiabatic single-neutrino Mikheyev-Smirnov-Wolfenstein (MSW) mechanism, in that neutrinos and antineutrinos remain in mass eigenstates as they evolve in flavor space. The other solution is analogous to the regular precession of a gyroscopic pendulum in flavor space, and has been discussed extensively in recent works. Results of recent numerical studies are best explained with combinations of these solutions in the following general scenario: (1) Near the neutrino sphere, the MSW-like many-body solution obtains. (2) Depending on neutrino vacuum mixing parameters, luminosities, energy spectra, and the matter density profile, collective flavor transformation in the nutation mode develops and drives neutrinos away from the MSW-like evolution and toward regular precession. (3) Neutrino and antineutrino flavors roughly evolve according to the regular precession solution until neutrino densities are low. In the late stage of the precession solution, a stepwise swapping develops in the energy spectra of νe and νμ/ντ. We also discuss some subtle points regarding adiabaticity in flavor transformation in dense-neutrino systems.

  10. The construction of sparse models of Mars' crustal magnetic field

    NASA Astrophysics Data System (ADS)

    Moore, Kimberly; Bloxham, Jeremy

    2017-04-01

    The crustal magnetic field of Mars is a key constraint on Martian geophysical history, especially the timing of the dynamo shutoff. Maps of the crustal magnetic field of Mars show wide variations in the intensity of magnetization, with most of the Northern hemisphere only weakly magnetized. Previous methods of analysis tend to favor smooth solutions for the crustal magnetic field of Mars, making use of techniques such as L2 norms. Here we utilize inversion methods designed for sparse models, to see how much of the surface area of Mars must be magnetized in order to fit available spacecraft magnetic field data. We solve for the crustal magnetic field at 10,000 individual magnetic pixels on the surface of Mars. We employ an L1 regularization, and solve for models where each magnetic pixel is identically zero, unless required otherwise by the data. We find solutions with an adequate fit to the data with over 90% sparsity (90% of magnetic pixels having a field value of exactly 0). We contrast these solutions with L2-based solutions, as well as an elastic net model (combination of L1 and L2). We find our sparse solutions look dramatically different from previous models in the literature, but still give a physically reasonable history of the dynamo (shutting off around 4.1 Ga).
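
    The L1-regularized inversion can be sketched with ISTA (iterative soft thresholding); the toy kernel, the sparse "magnetization", and the weight `lam` below are illustrative, not the Mars magnetic-pixel setup:

```python
import numpy as np

# Sparse (L1) inversion via ISTA; soft thresholding yields coefficients
# that are exactly zero, unlike a smooth L2 (minimum-norm) inversion.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))      # underdetermined toy kernel
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [2.0, -1.5, 1.0]  # sparse "magnetization"
b = A @ x_true

lam = 5.0                               # L1 weight (illustrative)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros(100)
for _ in range(1000):
    g = x - step * (A.T @ (A @ x - b))                        # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

sparsity = float(np.mean(x == 0.0))     # fraction of exactly-zero pixels
```

    Counting `x == 0.0` exactly, as above, is what makes the "90% sparsity" claim in the record meaningful: L1 solutions contain identical zeros, whereas L2 solutions merely contain small values.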

  11. Terminal attractors for addressable memory in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1988-01-01

    A new type of attractors - terminal attractors - for an addressable memory in neural networks operating in continuous time is introduced. These attractors represent singular solutions of the dynamical system. They intersect (or envelope) the families of regular solutions while each regular solution approaches the terminal attractor in a finite time period. It is shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the weight matrix.

  12. Markov chain Monte Carlo techniques and spatial-temporal modelling for medical EIT.

    PubMed

    West, Robert M; Aykroyd, Robert G; Meng, Sha; Williams, Richard A

    2004-02-01

    Many imaging problems, such as imaging with electrical impedance tomography (EIT), can be shown to be inverse problems: that is, either there is no unique solution or the solution does not depend continuously on the data. As a consequence, solution of inverse problems based on measured data alone is unstable, particularly if the mapping between the solution distribution and the measurements is also nonlinear, as in EIT. To deliver a practical stable solution, it is necessary to make considerable use of prior information or regularization techniques. The role of a Bayesian approach is therefore of fundamental importance, especially when coupled with Markov chain Monte Carlo (MCMC) sampling to provide information about solution behaviour. Spatial smoothing is a commonly used approach to regularization. In the human thorax EIT example considered here, nonlinearity increases the difficulty of imaging using only boundary data, leading to reconstructions which are often rather too smooth. In particular, in medical imaging the resistivity distribution usually contains substantial jumps at the boundaries of different anatomical regions. With spatial smoothing these boundaries can be masked by blurring. This paper focuses on the medical application of EIT to monitor lung and cardiac function; it uses explicit geometric information regarding anatomical structure and incorporates temporal correlation. Some simple properties are assumed known, or at least reliably estimated from separate studies, whereas others are estimated from the voltage measurements. This structural formulation also allows direct estimation of clinically important quantities, such as ejection fraction and residual capacity, along with assessment of precision.

  13. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct method for maximizing the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The method is qualitatively and quantitatively evaluated on simulated and real data to validate its accuracy, efficiency, and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
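
    A sketch of a generalized p-shrinkage mapping of the Chartrand style commonly used for such subproblems (the exact form in the paper may differ); at p = 1 it reduces to the classical soft threshold:

```python
import numpy as np

def p_shrink(t, lam, p):
    """Generalized p-shrinkage mapping; soft thresholding when p = 1."""
    mag = np.abs(t)
    with np.errstate(divide="ignore", invalid="ignore"):
        shrunk = np.maximum(mag - lam ** (2.0 - p) * mag ** (p - 1.0), 0.0)
    return np.sign(t) * np.where(mag > 0.0, shrunk, 0.0)

t = np.array([-3.0, -1.0, -0.2, 0.0, 0.2, 1.0, 3.0])
soft = p_shrink(t, 0.5, 1.0)    # p = 1: classical soft threshold
sharp = p_shrink(t, 0.5, 0.5)   # p < 1: kills small entries, spares large ones
```

    The appeal of p < 1 is visible elementwise: small entries are still driven exactly to zero, while large entries are shrunk less than under soft thresholding, reducing the bias of the l1 form.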

  14. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    NASA Astrophysics Data System (ADS)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
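
    Depth weighting of the kind discussed here can be sketched as a weighted minimum-norm inversion; the kernel decay, the depths, and the exponent `beta` below are illustrative, not a specific field setup:

```python
import numpy as np

# Depth-weighted minimum-norm inversion: minimize ||W m|| subject to A m = d,
# where W compensates the rapid decay of kernel sensitivity with depth.
rng = np.random.default_rng(4)
ncells, ndata = 30, 10
depths = np.linspace(1.0, 10.0, ncells)
A = rng.standard_normal((ndata, ncells)) / depths**3  # sensitivity decays with depth
d = rng.standard_normal(ndata)

beta = 3.0                                # depth-weighting exponent
W = np.diag(depths ** (-beta / 2.0))      # Li-Oldenburg-style weighting
Aw = A @ np.linalg.inv(W)                 # change of variables y = W m
y = Aw.T @ np.linalg.solve(Aw @ Aw.T + 1e-8 * np.eye(ndata), d)
m = np.linalg.solve(W, y)                 # map back to model space
```

    Because `W` penalizes deep cells less, the minimum-norm solution is no longer forced to concentrate near the surface; matching `beta` to the kernel's decay rate is exactly the exponent choice the record argues about.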

  15. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. 
The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541

  16. Boundary Regularity for the Porous Medium Equation

    NASA Astrophysics Data System (ADS)

    Björn, Anders; Björn, Jana; Gianazza, Ugo; Siljander, Juhana

    2018-05-01

    We study the boundary regularity of solutions to the porous medium equation u_t = Δu^m in the degenerate range m > 1. In particular, we show that in cylinders the Dirichlet problem with positive continuous boundary data on the parabolic boundary has a solution which attains the boundary values, provided that the spatial domain satisfies the elliptic Wiener criterion. This condition is known to be optimal, and it is a consequence of our main theorem, which establishes a barrier characterization of regular boundary points for general—not necessarily cylindrical—domains in R^{n+1}. One of our fundamental tools is a new strict comparison principle between sub- and superparabolic functions, which makes it essential for us to study both nonstrict and strict Perron solutions to be able to develop a fruitful boundary regularity theory. Several other comparison principles and pasting lemmas are also obtained. In the process we obtain a rather complete picture of the relation between sub/superparabolic functions and weak sub/supersolutions.

  17. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain strong sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strong sparsity-promoting solutions, we employ the ℓ1∕2 norm as a regularizer. The very recent study on ℓ1∕2 norm regularization theory in compressive sensing shows that its solutions can be sparser than those obtained using the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems, so that closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary avoids trivial solutions while simultaneously capturing its intrinsic properties. The experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn a more accurate dictionary in terms of dictionary recovery and image processing than the state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.
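
    The one-dimensional subproblem behind an ℓ1∕2 regularizer can be sketched numerically; the brute-force grid search below stands in for the closed-form half-thresholding solution and just illustrates its behavior:

```python
import numpy as np

# The scalar subproblem behind an l1/2 regularizer:
#   min_x 0.5*(x - t)^2 + lam*|x|**0.5
# solved here by brute-force grid search in place of the closed form.
def l_half_prox(t, lam):
    grid = np.linspace(-5.0, 5.0, 20001)
    vals = 0.5 * (grid - t) ** 2 + lam * np.abs(grid) ** 0.5
    return float(grid[int(np.argmin(vals))])

small = l_half_prox(0.4, 0.5)   # small input: driven exactly to zero
large = l_half_prox(3.0, 0.5)   # large input: survives, only mildly shrunk
```

    The all-or-nothing behavior (small inputs snapped to zero, large inputs barely shrunk) is what makes the ℓ1∕2 penalty a stronger sparsity promoter than the ℓ1 norm.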

  18. Well-posedness of characteristic symmetric hyperbolic systems

    NASA Astrophysics Data System (ADS)

    Secchi, Paolo

    1996-06-01

    We consider the initial-boundary-value problem for quasi-linear symmetric hyperbolic systems with characteristic boundary of constant multiplicity. We show the well-posedness in Hadamard's sense (i.e., existence, uniqueness and continuous dependence of solutions on the data) of regular solutions in suitable function spaces which take into account the loss of regularity in the normal direction to the characteristic boundary.

  19. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularization solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR, and the recently proposed hybrid method. A discussion and comparison of the available stopping rules are included. A vibrating plate is considered as an example to validate our results.
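
    Semi-convergence and iteration-count regularization can be sketched with a plain CGLS loop (conjugate gradients applied to the normal equations); the synthetic ill-conditioned matrix below is illustrative, not a boundary element system:

```python
import numpy as np

# CGLS on a synthetic ill-posed system; the iteration count acts as
# the regularization parameter.
rng = np.random.default_rng(5)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(1.0 / np.arange(1, n + 1) ** 2) @ V.T   # decaying spectrum
x_true = V[:, 0] + 0.5 * V[:, 1]
b = A @ x_true + 1e-4 * rng.standard_normal(n)          # noisy data

x = np.zeros(n)
r = b.copy()            # residual b - A x
s = A.T @ r
p = s.copy()
gamma = s @ s
res_hist, err_hist = [], []
for _ in range(15):
    q = A @ p
    alpha = gamma / (q @ q)
    x = x + alpha * p
    r = r - alpha * q
    s = A.T @ r
    gamma_new = s @ s
    p = s + (gamma_new / gamma) * p
    gamma = gamma_new
    res_hist.append(np.linalg.norm(r))
    err_hist.append(np.linalg.norm(x - x_true))

# "Semi-convergence": the error to x_true is typically smallest at an
# early iterate, so stopping there regularizes the solution.
best_iter = int(np.argmin(err_hist))
```

    The residual norm decreases monotonically, so a stopping rule has to watch something else, e.g. the discrepancy principle against the noise level, rather than waiting for the residual to flatten.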

  20. Analytic derivation of an approximate SU(3) symmetry inside the symmetry triangle of the interacting boson approximation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonatsos, Dennis; Karampagia, S.; Casten, R. F.

    2011-05-15

    Using a contraction of the SU(3) algebra to the algebra of the rigid rotator in the large-boson-number limit of the interacting boson approximation (IBA) model, a line is found inside the symmetry triangle of the IBA, along which the SU(3) symmetry is preserved. The line extends from the SU(3) vertex to near the critical line of the first-order shape/phase transition separating the spherical and prolate deformed phases, and it lies within the Alhassid-Whelan arc of regularity, the unique valley of regularity connecting the SU(3) and U(5) vertices in the midst of chaotic regions. In addition to providing an explanation for the existence of the arc of regularity, the present line represents an example of an analytically determined approximate symmetry in the interior of the symmetry triangle of the IBA. The method is applicable to algebraic models possessing subalgebras amenable to contraction. This condition is equivalent to algebras in which the equilibrium ground state and its rotational band become energetically isolated from intrinsic excitations, as typified by deformed solutions to the IBA for large numbers of valence nucleons.

  1. Analytic regularization of uniform cubic B-spline deformation fields.

    PubMed

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central differencing solution.

  2. Thermodynamics of magnesian calcite solid-solutions at 25°C and 1 atm total pressure

    USGS Publications Warehouse

    Busenberg, Eurybiades; Plummer, Niel

    1989-01-01

    The stability of magnesian calcites was reexamined, and new results are presented for 28 natural inorganic, 12 biogenic, and 32 synthetic magnesian calcites. The magnesian calcite solid-solutions were separated into two groups on the basis of differences in stoichiometric solubility and other physical and chemical properties. Group I consists of solids of mainly metamorphic and hydrothermal origin, synthetic calcites prepared at high temperatures and pressures, and synthetic solids prepared at low temperature and very low calcite supersaturations () from artificial sea water or NaCl-MgCl2-CaCl2 solutions. Group I solids are essentially binary solid-solutions of CaCO3 and MgCO3, and are thought to be relatively free of structural defects. Group II solid-solutions are either of biogenic origin or are synthetic magnesian calcites and protodolomites (0-20 and ~45 mole percent MgCO3) prepared at high calcite supersaturations () from NaCl-Na2SO4-MgCl2-CaCl2 or NaCl-MgCl2-CaCl2 solutions. Group II solid-solutions are treated as massively defective solids. The defects include substitution of foreign ions (Na+ and SO4^2-) in the magnesian calcite lattice (point defects) and dislocations (~2 · 10^9 cm^-2). Within each group, the excess free energy of mixing, GE, is described by the mixing model , where x is the mole fraction of the end-member Ca0.5Mg0.5CO3 in the solid-solution. The values of A0 and A1 for Group I and II solids were evaluated at 25°C. The equilibrium constants of all the solids are closely described by the equation ln , where KC and KD are the equilibrium constants of calcite and Ca0.5Mg0.5CO3. Group I magnesian calcites were modeled as sub-regular solid-solutions between calcite and dolomite, and between calcite and "disordered dolomite". Both models yield almost identical equilibrium constants for these magnesian calcites. The Group II magnesian calcites were modeled as sub-regular solid-solutions between defective calcite and protodolomite.
Group I and II solid-solutions differ significantly in stability. The rate of crystal growth and the chemical composition of the aqueous solutions from which the solids were formed are the main factors controlling stoichiometric solubility of the magnesian calcites and the density of crystal defects. The literature on the occurrence and behavior of magnesian calcites in sea water and other aqueous solutions is also examined.

  3. On steady motion of viscoelastic fluid of Oldroyd type

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baranovskii, E. S., E-mail: esbaranovskii@gmail.com

    2014-06-01

    We consider a mathematical model describing the steady motion of a viscoelastic medium of Oldroyd type under the Navier slip condition at the boundary. In the rheological relation, we use the objective regularized Jaumann derivative. We prove the solvability of the corresponding boundary-value problem in the weak setting. We obtain an estimate for the norm of a solution in terms of the data of the problem. We show that the solution set is sequentially weakly closed. Furthermore, we give an analytic solution of the boundary-value problem describing the flow of a viscoelastic fluid in a flat channel under a slip condition at the walls. Bibliography: 13 titles.

  4. On The Dynamics and Design of a Two-body Wave Energy Converter

    NASA Astrophysics Data System (ADS)

    Liang, Changwei; Zuo, Lei

    2016-09-01

    A two-body wave energy converter oscillating in heave is studied in this paper. The energy is extracted through the relative motion between the floating and submerged bodies. A linearized model in the frequency domain is adopted to study the dynamics of such a two-body system, with consideration of both the viscous damping and the hydrodynamic damping. The closed-form solution of the maximum absorption power and the corresponding power take-off parameters are obtained. The suboptimal and optimal designs for a two-body system are proposed based on the closed-form solution. The physical insight of the optimal design is to have one of the damped natural frequencies of the two-body system the same as, or as close as possible to, the excitation frequency. A case study is conducted to investigate the influence of the submerged body on the absorption power of a two-body system subjected to suboptimal and optimal design under regular and irregular wave excitations. It is found that the absorption power of the two-body system can be significantly higher than that of the single-body system with the same floating buoy in both regular and irregular waves. In regular waves, it is found that the mass of the submerged body should be designed with an optimal value in order to achieve the maximum absorption power for the given floating buoy. The viscous damping on the submerged body should be as small as possible for a given mass in both regular and irregular waves.
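
    The linearized frequency-domain model can be sketched as a two-degree-of-freedom system with a power take-off damper acting on the relative motion; all parameter values below are made up for illustration:

```python
import numpy as np

# Two-DOF frequency-domain model: (-w^2 M + i w C + K) X = F, with the
# power take-off (PTO) damper coupling the two bodies in heave.
m1, m2 = 1.0e5, 2.0e5        # masses incl. added mass (kg), illustrative
k1, k2 = 4.0e5, 1.0e5        # hydrostatic/mooring stiffnesses (N/m)
c_pto = 5.0e4                # PTO damping between the bodies (N s/m)
F1 = 1.0e4                   # wave excitation amplitude on the float (N)
omega = 1.2                  # wave frequency (rad/s)

M = np.diag([m1, m2])
C = c_pto * np.array([[1.0, -1.0], [-1.0, 1.0]])
K = np.diag([k1, k2])
F = np.array([F1, 0.0])

X = np.linalg.solve(-omega**2 * M + 1j * omega * C + K, F)
rel = X[0] - X[1]                              # relative heave amplitude
power = 0.5 * c_pto * omega**2 * abs(rel)**2   # mean absorbed power (W)
```

    Sweeping `omega` (or the submerged mass `m2`) in this sketch reproduces the qualitative design question in the record: absorbed power peaks when a damped natural frequency of the coupled system sits at the excitation frequency.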

  5. Steady state temperature distribution in dermal regions of an irregular tapered shaped human limb with variable eccentricity.

    PubMed

    Agrawal, M; Pardasani, K R; Adlakha, N

    2014-08-01

    Investigators in the past have developed models of temperature distribution in the human limb assuming it to be a regular circular or elliptical tapered cylinder. But in reality the limb is not of a regular tapered cylindrical shape; the radius and eccentricity are not the same throughout the limb. In view of the above, a model of temperature distribution in an irregular tapered elliptical shaped human limb is proposed for a three-dimensional steady-state case in this paper. The limb is assumed to be composed of multiple cylindrical substructures with variable radius and eccentricity. The mathematical model incorporates the effect of blood mass flow rate, metabolic activity, and thermal conductivity. The outer surface is exposed to the environment, and appropriate boundary conditions have been framed. The finite element method has been employed to obtain the solution. The temperature profiles have been computed in the dermal layers of a human limb and used to study the effect of shape, microstructure, and biophysical parameters on temperature distribution in human limbs. The proposed model is more realistic than conventional models, as it can be effectively applied to regular and irregular structures of the body with variable radius and eccentricity to study thermal behaviour. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. A hybrid Pade-Galerkin technique for differential equations

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1993-01-01

    A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Padé expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Padé approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Padé approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Padé approximations fail to do so. The method is discussed and topics for future investigations are indicated.
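
    A minimal sketch of step two of this technique, converting a truncated power series into a rational function. The [1/1] Padé approximant below is derived by matching series coefficients; the series for e^x is only a stand-in for a perturbation expansion.

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*x) / (1 + b1*x) that matches
    the series c0 + c1*x + c2*x**2 through second order.  Matching
    coefficients gives a0 = c0, b1 = -c2/c1, a1 = c1 + c0*b1."""
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# truncated series for exp(x): 1 + x + x^2/2
r = pade_1_1(1.0, 1.0, 0.5)
```

At x = 1 the truncated series gives 2.5 while the rational form gives 3.0, closer to e ≈ 2.718 — the kind of improvement the second step is meant to deliver before the Galerkin correction.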

  7. FeynArts model file for MSSM transition counterterms from DREG to DRED

    NASA Astrophysics Data System (ADS)

    Stöckinger, Dominik; Varšo, Philipp

    2012-02-01

    The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can also serve as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS¯ parton distribution functions with supersymmetric renormalization, or avoiding the renormalization of ɛ-scalars in dimensional reduction.
    Program summary
    Program title: MSSMdreg2dred.mod
    Catalogue identifier: AEKR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: LGPL License [1]
    No. of lines in distributed program, including test data, etc.: 7600
    No. of bytes in distributed program, including test data, etc.: 197 629
    Distribution format: tar.gz
    Programming language: Mathematica, FeynArts
    Computer: Any, capable of running Mathematica and FeynArts
    Operating system: Any, with a running Mathematica and FeynArts installation
    Classification: 4.4, 5, 11.1
    Subprograms used: ADOW_v1_0, FeynArts, CPC 140 (2001) 418
    Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
    Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular, the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
    Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
    Running time: A few seconds to generate typical Feynman graphs with FeynArts.

  8. Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems

    NASA Astrophysics Data System (ADS)

    Cianchi, Andrea; Maz'ya, Vladimir G.

    2018-05-01

    Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L2-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.

  9. Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.

    PubMed

    Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata

    2008-09-01

    A novel algorithmic scheme for the numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
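
    The adjustable-relaxation idea can be sketched on a 1D Poisson problem (not the authors' 3D PNP solver): over-relaxed Gauss-Seidel sweeps whose relaxation coefficient is reduced whenever the sweep residual grows. Grid size and the initial coefficient are arbitrary choices.

```python
def solve_poisson_1d(rho, h, tol=1e-10, max_iter=20000):
    """Over-relaxed Gauss-Seidel sweeps for u'' = -rho on a uniform
    grid with u = 0 at both ends.  The relaxation coefficient omega
    is reduced whenever the sweep residual grows -- a crude version
    of an adjustable relaxation coefficient."""
    n = len(rho)
    u = [0.0] * n
    omega = 1.8
    prev = float("inf")
    for _ in range(max_iter):
        res = 0.0
        for i in range(1, n - 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + h * h * rho[i])
            du = gs - u[i]
            u[i] += omega * du
            res += du * du
        if res > prev:                 # diverging: relax less aggressively
            omega = max(1.0, 0.9 * omega)
        prev = res
        if res < tol * tol:
            break
    return u

# u'' = -1 with u(0) = u(1) = 0 has the exact solution u(x) = x(1 - x)/2
n = 21
u = solve_poisson_1d([1.0] * n, 1.0 / (n - 1))
```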

  10. Modified Finch and Skea stellar model compatible with observational data

    NASA Astrophysics Data System (ADS)

    Pandya, D. M.; Thomas, V. O.; Sharma, R.

    2015-04-01

    We present a new class of solutions to Einstein's field equations corresponding to a static spherically symmetric anisotropic system, obtained by generalizing the ansatz of Finch and Skea [Class. Quantum Grav. 6:467, 1989] for the gravitational potential g_rr. The anisotropic stellar model previously studied by Sharma and Ratanpal [Int. J. Mod. Phys. D 13:1350074, 2013] is a sub-class of the solutions provided here. Based on physical requirements, regularity conditions and stability, we prescribe bounds on the model parameters. By systematically fixing values of the model parameters within the prescribed bounds, we demonstrate that our model is compatible with the observed masses and radii of a wide variety of compact stars like 4U 1820-30, PSR J1903+327, 4U 1608-52, Vela X-1, PSR J1614-2230, SAX J1808.4-3658 and Her X-1.

  11. Viscous damping and spring force calculation of regularly perforated MEMS microstructures in the Stokes' approximation

    PubMed Central

    Homentcovschi, Dorel; Murray, Bruce T.; Miles, Ronald N.

    2013-01-01

    There are a number of applications for microstructure devices consisting of a regular pattern of perforations, and many of these utilize fluid damping. For the analysis of viscous damping and for calculating the spring force in some cases, it is possible to take advantage of the regular hole pattern by assuming periodicity. Here a model is developed to determine these quantities based on the solution of the Stokes' equations for the air flow. Viscous damping is directly related to thermal-mechanical noise. As a result, the design of perforated microstructures with minimal viscous damping is of real practical importance. A method is developed to calculate the damping coefficient in microstructures with periodic perforations. The result can be used to minimize squeeze film damping. Since micromachined devices have finite dimensions, the periodic model for the perforated microstructure has to be associated with the calculation of some frame (edge) corrections. Analysis of the edge corrections has also been performed. Results from analytical formulas and numerical simulations match very well with published measured data. PMID:24058267

  13. A note on convergence of solutions of total variation regularized linear inverse problems

    NASA Astrophysics Data System (ADS)

    Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar

    2018-05-01

    In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise-free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise-free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence rate results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non-convex domains, deblurring and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.

  14. Stereo-tomography in triangulated models

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of the scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from previous work that incorporates various regularization techniques into the cost function of stereo-tomography, we believe extending stereo-tomography to a triangulated model is the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of stereo-tomographic data components with respect to model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on the ray perturbation theory for interfaces. A sloth model representation is sparser than the conventional B-spline representation. A sparser model representation leads to a smaller-scale stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving linear equations, a faster convergence rate and a lower requirement on the quantity of data. Moreover, a quantitative representation of the interface strengthens the relationships among different model components, which makes cross regularizations among these model components, such as node coordinates, scatterer coordinates and scattering angles, more straightforward and easier to implement. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of stereo-tomography in triangulated models. This provides a solid theoretical foundation for real applications in the future.

  15. The determination of pair-distance distribution by double electron-electron resonance: regularization by the length of distance discretization with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Dzuba, Sergei A.

    2016-08-01

    Pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allows for the determination of their distance distribution function, P(r). P(r) is derived as a solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit for which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has an obvious advantage of the fulfillment of the non-negativity constraint for P(r). The regularization by the increasing of distance discretization length for the case of overlapping broad and narrow distributions may be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from literature for doubly spin-labeled DNA and peptide antibiotics.
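
    The coarse-grid, nonnegative Monte Carlo fitting idea can be sketched on a toy first-kind Fredholm problem. The Gaussian kernel and grid sizes below are illustrative stand-ins, not the DEER kernel.

```python
import math
import random

def mc_fit(K, y, n_trials=20000, seed=0):
    """Monte Carlo trials of nonnegative P vectors on a coarse grid:
    propose random single-coordinate perturbations and keep any that
    reduce the misfit ||K P - y||^2.  Nonnegativity holds by clipping."""
    rng = random.Random(seed)
    m, n = len(K), len(K[0])

    def misfit(P):
        return sum((sum(K[i][j] * P[j] for j in range(n)) - y[i]) ** 2
                   for i in range(m))

    P = [1.0 / n] * n
    best = misfit(P)
    for _ in range(n_trials):
        j = rng.randrange(n)
        trial = P[:]
        trial[j] = max(0.0, trial[j] + rng.uniform(-0.1, 0.1))
        f = misfit(trial)
        if f < best:
            P, best = trial, f
    return P, best

# toy kernel: Gaussian bumps on a coarse r-grid; data from P = delta at r = 1
r_grid = [0.0, 1.0, 2.0, 3.0]
t_grid = [0.0, 0.75, 1.5, 2.25, 3.0]
K = [[math.exp(-(t - r) ** 2) for r in r_grid] for t in t_grid]
y = [math.exp(-(t - 1.0) ** 2) for t in t_grid]
P, best = mc_fit(K, y)
```

The coarse grid (four r-bins here) is doing the regularizing: a much finer grid would let the random search fit the noise, which is the instability the record describes.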

  16. Regular and irregular deswelling of polyacrylate and hyaluronate gels induced by oppositely charged surfactants.

    PubMed

    Nilsson, Peter; Hansson, Per

    2008-09-15

    The deswelling kinetics of macroscopic polyacrylate (PA) gels in solutions of dodecyltrimethylammonium bromide (C(12)TAB) and cetyltrimethylammonium bromide (C(16)TAB), with and without added sodium bromide, as well as hyaluronate (HA) gels in solutions of cetylpyridinium chloride (CPC) are investigated. Additional data are also provided by small-angle X-ray scattering and microgel experiments. The purpose is to study the deswelling behavior of (1) regularly deswelling gels, for which the deswelling is successfully described using a core/shell model earlier employed for microgels, and (2) irregularly deswelling gels, where the gel turns into a balloon-like structure with a dense outer layer surrounding a liquid-filled core. For regularly deswelling gels, the deswelling of PA/C(12)TAB is found to be controlled by diffusion through both stagnant layer and collapsed surface phase, while for PA/C(16)TAB it is found to be controlled mainly by the latter. The difference in deswelling rate between the two is found to correspond to the difference in surfactant diffusion coefficient in the surface phase. Factors found to promote irregular deswelling, described as balloon formation, are rapid surfactant binding, high bromide and surfactant concentration, longer surfactant chain length, and macroscopic gel size. Scattering data indicating a cubic structure for HA/CPC complexes are reported.

  17. Fem Simulation of Triple Diffusive Natural Convection Along Inclined Plate in Porous Medium: Prescribed Surface Heat, Solute and Nanoparticles Flux

    NASA Astrophysics Data System (ADS)

    Goyal, M.; Goyal, R.; Bhargava, R.

    2017-12-01

    In this paper, triple diffusive natural convection under Darcy flow over an inclined plate embedded in a porous medium saturated with a binary base fluid containing nanoparticles and two salts is studied. The model used for the nanofluid is the one which incorporates the effects of Brownian motion and thermophoresis. In addition, the thermal energy equations include regular diffusion and cross-diffusion terms. The vertical surface has the heat, mass and nanoparticle fluxes each prescribed as a power law function of the distance along the wall. The boundary layer equations are transformed into a set of ordinary differential equations with the help of group theory transformations. A wide range of parameter values are chosen to bring out the effect of buoyancy ratio, regular Lewis number and modified Dufour parameters of both salts and nanofluid parameters with varying angle of inclinations. The effects of parameters on the velocity, temperature, solutal and nanoparticles volume fraction profiles, as well as on the important parameters of heat and mass transfer, i.e., the reduced Nusselt, regular and nanofluid Sherwood numbers, are discussed. Such problems find application in extrusion of metals, polymers and ceramics, production of plastic films, insulation of wires and liquid packaging.

  18. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing an L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled as L1TV and L1L2) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method, labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have smaller relative errors. However, when larger noise occurred in some electrodes (for example, signal loss during measurement), the L1TV and L1L2 methods obtained more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
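
    The iteratively reweighted idea behind an L1-norm data term can be sketched on a one-parameter toy fit; this generic IRLS is an illustration, not the paper's EP-reconstruction code.

```python
def irls_l1(a, b, n_iter=50, eps=1e-8):
    """Fit a scalar x minimizing sum_i |a[i]*x - b[i]| by iteratively
    reweighted least squares: each pass solves a weighted L2 problem
    with weights 1/|residual| (clipped at eps), which reproduces the
    L1 objective in the limit."""
    x = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)  # L2 start
    for _ in range(n_iter):
        w = [1.0 / max(abs(ai * x - bi), eps) for ai, bi in zip(a, b)]
        x = (sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
             / sum(wi * ai * ai for wi, ai in zip(w, a)))
    return x

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [100.0, 4.0, 6.0, 8.0, 10.0]   # b = 2*a except for one gross outlier
x_l1 = irls_l1(a, b)
```

The L2 fit (208/55 ≈ 3.78) is dragged away by the outlier, while the reweighted L1 fit stays near the true slope 2 — the same robustness the record reports for corrupted electrodes.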

  19. Solid-solution aqueous-solution equilibria: thermodynamic theory and representation

    USGS Publications Warehouse

    Glynn, P.D.; Reardon, E.J.

    1990-01-01

    Thorstenson and Plummer's (1977) 'stoichiometric saturation' model is reviewed, and a general relation between stoichiometric saturation constants Kss and excess free energies of mixing is derived for a binary solid solution B(1-x)C(x)A: GE = RT[ln Kss - x ln(x KCA) - (1-x) ln((1-x) KBA)]. This equation allows a suitable excess free energy function, such as Guggenheim's (1937) sub-regular function, to be fitted from experimentally determined Kss constants. Solid-phase free energies and component activity coefficients can then be determined from one or two fitted parameters and from the endmember solubility products KBA and KCA. A general form of Lippmann's (1977, 1980) 'solutus' equation is derived from an examination of Lippmann's (1977, 1980) 'total solubility product' model. Lippmann's ΣΠ or 'total solubility product' variable is used to represent graphically not only thermodynamic equilibrium states and primary saturation states but also stoichiometric saturation and pure-phase saturation states. -from Authors
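
    The relation between Kss and the excess free energy of mixing can be checked numerically; the endmember solubility products below are invented values, not data from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def excess_free_energy(Kss, x, K_CA, K_BA, T=298.15):
    """GE = RT[ln Kss - x*ln(x*K_CA) - (1-x)*ln((1-x)*K_BA)]
    for a binary solid solution B(1-x)C(x)A."""
    return R * T * (math.log(Kss)
                    - x * math.log(x * K_CA)
                    - (1 - x) * math.log((1 - x) * K_BA))

def ideal_Kss(x, K_CA, K_BA):
    """Stoichiometric-saturation constant of an ideal mixture,
    obtained by setting GE = 0 in the relation above."""
    return (x * K_CA) ** x * ((1 - x) * K_BA) ** (1 - x)
```

An ideal mixture should return GE = 0, which gives a quick consistency check on the fitted Kss values.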

  20. Soliton solutions to the fifth-order Korteweg-de Vries equation and their applications to surface and internal water waves

    NASA Astrophysics Data System (ADS)

    Khusnutdinova, K. R.; Stepanyants, Y. A.; Tranter, M. R.

    2018-02-01

    We study solitary wave solutions of the fifth-order Korteweg-de Vries equation which contains, besides the traditional quadratic nonlinearity and third-order dispersion, additional terms including cubic nonlinearity and fifth order linear dispersion, as well as two nonlinear dispersive terms. An exact solitary wave solution to this equation is derived, and the dependence of its amplitude, width, and speed on the parameters of the governing equation is studied. It is shown that the derived solution can represent either an embedded or regular soliton depending on the equation parameters. The nonlinear dispersive terms can drastically influence the existence of solitary waves, their nature (regular or embedded), profile, polarity, and stability with respect to small perturbations. We show, in particular, that in some cases embedded solitons can be stable even with respect to interactions with regular solitons. The results obtained are applicable to surface and internal waves in fluids, as well as to waves in other media (plasma, solid waveguides, elastic media with microstructure, etc.).

  1. Epidemic spreading in weighted networks: an edge-based mean-field solution.

    PubMed

    Yang, Zimo; Zhou, Tao

    2012-05-01

    Weight distribution greatly impacts the epidemic spreading taking place on top of networks. This paper presents a study of a susceptible-infected-susceptible model on regular random networks with different kinds of weight distributions. Simulation results show that the more homogeneous weight distribution leads to higher epidemic prevalence, which, unfortunately, could not be captured by the traditional mean-field approximation. This paper gives an edge-based mean-field solution for general weight distribution, which can quantitatively reproduce the simulation results. This method could be applied to characterize the nonequilibrium steady states of dynamical processes on weighted networks.

  2. Surface tension and density of Si-Ge melts

    NASA Astrophysics Data System (ADS)

    Ricci, Enrica; Amore, Stefano; Giuranno, Donatella; Novakovic, Rada; Tuissi, Ausonio; Sobczak, Natalia; Nowak, Rafal; Korpala, Bartłomiej; Bruzda, Grzegorz

    2014-06-01

    In this work, the surface tension and density of Si-Ge liquid alloys were determined by the pendant drop method. Over the range of measurements, both properties show a linear temperature dependence and a nonlinear concentration dependence. Indeed, the density decreases with increasing silicon content, exhibiting a positive deviation from ideality, while the surface tension increases and deviates negatively with respect to the ideal solution model. Taking into account the Si-Ge phase diagram, a simple lens type, the surface tension behavior of the Si-Ge liquid alloys was analyzed in the framework of the Quasi-Chemical Approximation for the Regular Solutions model. The new experimental results were compared with the few data available in the literature, obtained by the containerless method.
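
    For reference, the simplest one-parameter regular-solution model underlying such analyses gives symmetric activity coefficients; the interaction parameter below is an invented value, not a fitted Si-Ge quantity.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activity_coeffs(x2, omega, T):
    """Activity coefficients of a binary regular solution with a
    single interaction parameter omega (J/mol):
        ln g1 = omega * x2**2 / (R*T)
        ln g2 = omega * (1 - x2)**2 / (R*T)
    Positive omega gives positive deviations from ideality and vice
    versa."""
    g1 = math.exp(omega * x2 ** 2 / (R * T))
    g2 = math.exp(omega * (1 - x2) ** 2 / (R * T))
    return g1, g2
```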

  3. Constraints on geothermal reservoir volume change calculations from InSAR surface displacements and injection and production data

    NASA Astrophysics Data System (ADS)

    Kaven, J. Ole; Barbour, Andrew J.; Ali, Tabrez

    2017-04-01

    Continual production of geothermal energy at times leads to significant surface displacement that can be observed at high spatial resolution using InSAR imagery. The surface displacement can be analyzed to resolve volume change within the reservoir, revealing the often-complicated patterns of reservoir deformation. Simple point source models of reservoir deformation in a homogeneous elastic or poro-elastic medium can be superimposed to provide spatially varying, kinematic representations of reservoir deformation. In many cases, injection and production data are known in insufficient detail; but, when these are available, the same Green functions can be used to constrain the reservoir deformation. Here we outline how the injection and production data can be used to constrain bounds on the solution by posing the inversion as quadratic programming with inequality constraints and regularization, rather than as a conventional least squares solution with regularization. We apply this method to InSAR-derived surface displacements at the Coso and Salton Sea Geothermal Fields in California, using publicly available injection and production data. At both geothermal fields the available surface deformation in conjunction with the injection and production data permits robust solutions for the spatially varying reservoir deformation. The reservoir deformation pattern resulting from the constrained quadratic programming solution is more heterogeneous when compared to a conventional least squares solution. The increased heterogeneity is consistent with the known structural controls on heat and fluid transport in each geothermal reservoir.

  4. A Novel Hypercomplex Solution to Kepler's Problem

    NASA Astrophysics Data System (ADS)

    Condurache, C.; Martinuşi, V.

    2007-05-01

    Using a Sundman-like regularization, we offer a unified solution to Kepler's problem in terms of hypercomplex numbers. The fundamental role in this paper is played by the Laplace-Runge-Lenz prime integral and by the algebra of hypercomplex numbers. The procedure unifies and generalizes the regularizations of Levi-Civita and Kustaanheimo-Stiefel. Closed-form hypercomplex expressions for the law of motion and velocity are deduced, together with novel hypercomplex prime integrals.

  5. Synthesis and structural study of two new heparin-like hexasaccharides.

    PubMed

    Lucas, Ricardo; Angulo, Jesús; Nieto, Pedro M; Martín-Lomas, Manuel

    2003-07-07

    Two new heparin-like hexasaccharides, 5 and 6, have been synthesised using a convergent block strategy, and their solution conformations have been determined by NMR spectroscopy and molecular modelling. Both hexasaccharides contain the basic structural motif of the regular region of heparin, but with negative charge distributions designed to gain insight into the mechanism of fibroblast growth factor (FGF) activation.

  6. Are black holes with hair a normal state of matter?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieuwenhuizen, Th. M.

    Recent observations put forward that quasars are black holes with a magnetic dipole moment and no event horizon. To model hairy black holes, a quantum field for hydrogen is considered in curved space, coupled to the scalar curvature. An exact, regular solution for the interior metric occurs for supermassive black holes. The equation of state is p = -ρc²/3.

  7. A Fast Solution of the Lindley Equations for the M-Group Regression Problem. Technical Report 78-3, October 1977 through May 1978.

    ERIC Educational Resources Information Center

    Molenaar, Ivo W.

    The technical problems involved in obtaining Bayesian model estimates for the regression parameters in m similar groups are studied. The available computer programs, BPREP (BASIC) and BAYREG, both written in FORTRAN, require an amount of computer processing that does not encourage regular use. These programs are analyzed so that the performance…

  8. Entropy of mixing calculations for compound forming liquid alloys in the hard sphere system

    NASA Astrophysics Data System (ADS)

    Singh, P.; Khanna, K. N.

    1984-06-01

    It is shown that the semi-empirical model proposed in a previous paper for the evaluation of the entropy of mixing of simple liquid metal alloys leads to accurate results for compound-forming liquid alloys. The procedure is similar to that described for a regular solution. Numerical applications are made to NaGa, KPb and KTl alloys.

  9. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces significantly the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher and Fatemi model), which tends to restore regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
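
    A minimal 1D NL-means, before any TV correction, to show the patch-similarity weighting; patch size, search window and the filtering parameter h are arbitrary choices, not values from the paper.

```python
import math

def nl_means_1d(sig, patch=1, search=5, h=0.5):
    """Nonlocal means on a 1D signal: each sample becomes a weighted
    average of nearby samples whose surrounding patches look similar;
    weights are exp(-d2 / h**2), with d2 the squared patch distance."""
    n = len(sig)
    out = []
    for i in range(n):
        if i < patch or i + patch >= n:
            out.append(sig[i])          # leave boundary samples untouched
            continue
        num = den = 0.0
        for j in range(max(patch, i - search), min(n - patch, i + search + 1)):
            d2 = sum((sig[i + k] - sig[j + k]) ** 2
                     for k in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h))
            num += w * sig[j]
            den += w
        out.append(num / den)
    return out

clean = [0.0] * 8 + [1.0] * 8
noisy = [c + (0.1 if i % 2 == 0 else -0.1) for i, c in enumerate(clean)]
smoothed = nl_means_1d(noisy)
```

Flat regions average out well because many patches match, while samples straddling the step keep most of their residual noise — the edge drawback the record describes and the TV term is meant to fix.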

  10. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use the preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bent line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
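
    The interior-point scheme (a sequence of barrier subproblems with an increasing positive parameter) can be sketched on a one-variable problem. Minimizing (x - a)^2 subject to x >= 0 stands in for the PET objective with its nonnegativity constraint; Newton's method replaces the PCG subproblem solver.

```python
def barrier_min(a, t_values=(1.0, 10.0, 100.0, 1000.0, 1e4, 1e5)):
    """Log-barrier interior-point method for
        minimize (x - a)^2  subject to  x >= 0.
    For each barrier weight t we minimize
        f_t(x) = (x - a)^2 - log(x) / t
    by damped Newton iterations, warm-starting from the previous
    solution; the iterates approach the constrained minimizer from
    strictly inside the feasible region x > 0."""
    x = max(a, 0.0) + 1.0              # strictly feasible start
    for t in t_values:
        for _ in range(50):
            g = 2.0 * (x - a) - 1.0 / (t * x)      # f_t'(x)
            h = 2.0 + 1.0 / (t * x * x)            # f_t''(x)
            step = g / h
            while x - step <= 0.0:     # damp to stay strictly feasible
                step *= 0.5
            x -= step
    return x

x_star = barrier_min(-3.0)   # pushed toward the boundary x = 0
```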

  11. A Path Algorithm for Constrained Estimation

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2013-01-01

    Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
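
    The exact-penalty property — recovery of the constrained solution at a finite penalty constant — can be checked on a one-variable quadratic program; the numbers are illustrative, not from the article.

```python
def exact_penalty_min(rho, a=3.0, c=1.0):
    """Minimize (x - a)^2 + rho * max(x - c, 0) in closed form, the
    exact (absolute value) penalty version of
        minimize (x - a)^2  subject to  x <= c.
    On x <= c the penalty vanishes, so x = a is a candidate if it is
    feasible; on x >= c the penalized branch is minimized at
    x = a - rho/2; and the kink x = c is always a candidate.  The
    constrained solution is recovered once rho >= 2*(a - c)."""
    cand = [c]                           # the kink
    if a <= c:
        cand.append(a)                   # feasible unconstrained minimum
    if a - rho / 2.0 >= c:
        cand.append(a - rho / 2.0)       # interior minimum of penalized branch
    f = lambda x: (x - a) ** 2 + rho * max(x - c, 0.0)
    return min(cand, key=f)
```

With a = 3 and c = 1, any rho >= 4 returns exactly x = 1, while rho = 2 still violates the constraint — the finite-penalty recovery the abstract describes.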

  12. Stability of planar traveling waves in a Keller-Segel equation on an infinite strip domain

    NASA Astrophysics Data System (ADS)

    Chae, Myeongju; Choi, Kyudong; Kang, Kyungkeun; Lee, Jihoon

    2018-07-01

    We consider a simplified model of tumor angiogenesis, described by a Keller-Segel equation on the two-dimensional domain (x, y) ∈ R × Sλ, where Sλ is the circle of perimeter λ. It is known that the system allows planar traveling wave solutions of an invading type. When λ is sufficiently small, we establish the nonlinear stability of traveling wave solutions in the absence of chemical diffusion, provided the initial perturbation is sufficiently small in some weighted Sobolev space. When chemical diffusion is present, it can be shown that the system is linearly stable. Lastly, we prove that any solution with our front condition eventually becomes planar under certain regularity conditions.

  13. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool to design effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
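The l2-penalized, closed-form step underlying the abstract above can be sketched in isolation as ordinary ridge regression. This toy omits the MLS weighting and the manifold term of RLLR; the data and parameter names are purely illustrative.

```python
# Hedged sketch of the l2-regularized least-squares core used in
# RLLR-style fitting: solve (X^T X + lam I) w = X^T y in closed form.
import numpy as np

def ridge_fit(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=50)   # lightly noisy targets
w_hat = ridge_fit(X, y, lam=1e-3)             # close to w_true
```

With a small lam, the penalty barely biases the fit but guarantees the normal-equation matrix is well conditioned, which is the stabilization role it plays in the interpolation scheme.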

  14. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
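A minimal scalar sketch of the iteratively regularized Gauss-Newton update is shown below. A geometric sequence of regularization parameters stands in for the paper's heuristic selection rule, and the forward map F(x) = x^3 is invented for illustration.

```python
# Hedged sketch: iteratively regularized Gauss-Newton (IRGN) for a scalar
# nonlinear equation F(x) = y, using the update
#   x_{k+1} = x_k + (J^2 + a_k)^(-1) * (J * (y - F(x_k)) + a_k * (x0 - x_k))
# with a geometric sequence a_k (not the paper's heuristic rule).

def irgn(F, J, y, x0, alphas):
    x = x0
    for a in alphas:
        j = J(x)
        # regularized Gauss-Newton step; the a*(x0 - x) term pulls toward x0
        x = x + (j * (y - F(x)) + a * (x0 - x)) / (j * j + a)
    return x

F = lambda x: x ** 3          # toy forward operator
J = lambda x: 3 * x ** 2      # its derivative
alphas = [0.5 ** k for k in range(20)]
x_hat = irgn(F, J, y=8.0, x0=1.0, alphas=alphas)
print(round(x_hat, 4))        # close to the exact solution x = 2
```

As the a_k decay, the penalty toward the initial guess fades and the iterates settle on the solution of F(x) = y; with noisy data, stopping while a_k is still sizable is what regularizes the problem.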

  15. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least-squares method, where the Finite Element Method and the Euler Method are used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact rate, proportional to the transmission probability, which influences the number of people infected with influenza. The relation between the estimated parameter and the number of people infected with influenza is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
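The estimation loop described above can be caricatured as follows: Euler-step an SIR model forward and pick the contact rate that best fits incidence data by least squares. All parameter values are invented, Euler stepping replaces the paper's finite element discretization, the data are synthetic rather than CDC counts, and no regularization term is included in this toy.

```python
# Hedged sketch: estimate the SIR contact rate beta by least squares over
# a grid of candidates, with forward solutions from Euler's method.
import numpy as np

def sir_infected(beta, gamma=0.1, s0=0.99, i0=0.01, dt=0.1, steps=500):
    s, i = s0, i0
    infected = []
    for _ in range(steps):
        ds = -beta * s * i             # susceptibles lost to infection
        di = beta * s * i - gamma * i  # new infections minus recoveries
        s, i = s + dt * ds, i + dt * di
        infected.append(i)
    return np.array(infected)

data = sir_infected(beta=0.4)          # synthetic "observed" infections
grid = np.linspace(0.1, 0.8, 71)
beta_hat = min(grid, key=lambda b: float(np.sum((sir_infected(b) - data) ** 2)))
```

The grid search recovers the beta used to generate the data; with real, noisy counts the sum of squares would be augmented with a regularization term as in the paper.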

  16. On Analysis of Stationary Viscous Incompressible Flow Through a Radial Blade Machine

    NASA Astrophysics Data System (ADS)

    Neustupa, Tomáš

    2010-09-01

    The paper is concerned with the analysis of a two-dimensional model of incompressible, viscous, stationary flow through a radial blade machine. This type of turbine is sometimes called Kaplan's turbine. In the technical area it is used either to force some regular characteristic onto the flow of the medium going through the turbine (flow of melted iron, air conditioning) or to gain energy from the flowing medium (water). The inflow and outflow parts of the boundary are, in general, concentric circles: the larger one represents the inflow part of the boundary, the smaller one the outflow part. The blades of the machine are regularly spaced between them. We study the existence of the weak solution in the case of a nonlinear boundary condition of the "do-nothing" type. The model is interesting for studying the behavior of the flow when the boundary is formed by mutually disjoint and separated parts.

  17. Source localization in electromyography using the inverse potential problem

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2011-02-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.

  18. Assimilating data into open ocean tidal models

    NASA Astrophysics Data System (ADS)

    Kivman, Gennady A.

    The problem of deriving tidal fields from observations is ill-posed: because every practically available data set is incomplete and imperfect, an infinitely large number of allowable solutions fit the data within measurement errors. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.

  19. Phillips-Tikhonov regularization with a priori information for neutron emission tomographic reconstruction on Joint European Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bielecki, J.; Scholz, M.; Drozdowicz, K.

    A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. This work aims to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that the method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction on JET.
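A stripped-down Phillips-Tikhonov reconstruction can be sketched as below: an underdetermined linear system is regularized by penalizing second differences of the solution, which encodes a smoothness prior. Sizes, data, and the smooth test profile are invented; this is not the JET KN3 geometry or the paper's parameter-selection scheme.

```python
# Hedged sketch of Phillips-Tikhonov regularization for an underdetermined
# "tomography" system A x = b: minimize ||A x - b||^2 + lam * ||L x||^2,
# where L is a second-difference (smoothness) operator.
import numpy as np

n = 20
rng = np.random.default_rng(1)
A = rng.normal(size=(8, n))                           # 8 lines of sight, n pixels
x_true = np.exp(-((np.arange(n) - 10.0) / 4.0) ** 2)  # smooth emissivity profile
b = A @ x_true

L = np.diff(np.eye(n), n=2, axis=0)                   # second-difference operator
lam = 1.0
# regularized normal equations: (A^T A + lam L^T L) x = A^T b
x_hat = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```

Augmenting the rank-deficient normal matrix with L^T L makes the system uniquely solvable, with the smoothness prior filling in the directions the 8 measurements do not constrain.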

  20. Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.

    2016-03-15

    In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov Regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full widths at half maximum that are off by an average of 27%, and matched full widths at half maximum produce depths that are off by an average of 109%.
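The full-width-at-half-maximum matching criterion mentioned above can be computed from a sampled curve with a small helper like the following; the function and the Gaussian test signal are illustrative and not taken from the paper.

```python
# Hedged helper: full width at half maximum (FWHM) of a sampled,
# single-peaked curve, with linear interpolation at the two crossings.
import numpy as np

def fwhm(x, y):
    y = np.asarray(y, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]

    def cross(k0, k1):   # x-position where the curve crosses `half`
        return x[k0] + (half - y[k0]) * (x[k1] - x[k0]) / (y[k1] - y[k0])

    left = x[i] if i == 0 else cross(i - 1, i)
    right = x[j] if j == len(y) - 1 else cross(j, j + 1)
    return right - left

x = np.linspace(-5.0, 5.0, 1001)
width = fwhm(x, np.exp(-x ** 2 / 2.0))   # Gaussian: FWHM = 2*sqrt(2 ln 2)
print(round(width, 3))                    # ≈ 2.355
```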

  1. The Space-Wise Global Gravity Model from GOCE Nominal Mission Data

    NASA Astrophysics Data System (ADS)

    Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.

    2011-12-01

    In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between the local orbital and gradiometer reference frames. Error covariances are computed by Monte Carlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all the dependencies on external gravity information were removed, thus giving rise to a GOCE-only space-wise model. However, this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way as to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of the potential and of its second derivatives at mean satellite altitude.
These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed by a unique global covariance function, they could yield more information at local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.

  2. Optimal boundary regularity for a singular Monge-Ampère equation

    NASA Astrophysics Data System (ADS)

    Jian, Huaiyu; Li, You

    2018-06-01

    In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from a few geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce the (a, η) type to describe the convexity. As a result, we show that the more convex the domain, the better the regularity of the solution. In particular, the regularity is best near angular points.

  3. Static black hole solutions with a self-interacting conformally coupled scalar field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dotti, Gustavo; Gleiser, Reinaldo J.; Martinez, Cristian

    2008-05-15

    We study static, spherically symmetric black hole solutions of the Einstein equations with a positive cosmological constant and a conformally coupled self-interacting scalar field. Exact solutions for this model found by Martinez, Troncoso, and Zanelli were subsequently shown to be unstable under linear gravitational perturbations, with modes that diverge arbitrarily fast. We find that the moduli space of static, spherically symmetric solutions that have a regular horizon--and satisfy the weak and dominant energy conditions outside the horizon--is a singular subset of a two-dimensional space parametrized by the horizon radius and the value of the scalar field at the horizon. The singularity of this space of solutions provides an explanation for the instability of the Martinez, Troncoso, and Zanelli spacetimes and leads to the conclusion that, if we include stability as a criterion, there are no physically acceptable black hole solutions for this system that contain a cosmological horizon in the exterior of its event horizon.

  4. On a full Bayesian inference for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability in mathematically accounting for experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.

  5. Hairy AdS black holes with a toroidal horizon in 4D Einstein-nonlinear σ-model system

    NASA Astrophysics Data System (ADS)

    Astorino, Marco; Canfora, Fabrizio; Giacomini, Alex; Ortaggio, Marcello

    2018-01-01

    An exact hairy asymptotically locally AdS black hole solution with a flat horizon in the Einstein-nonlinear sigma model system in (3+1) dimensions is constructed. The ansatz for the nonlinear SU (2) field is regular everywhere and depends explicitly on Killing coordinates, but in such a way that its energy-momentum tensor is compatible with a metric with Killing fields. The solution is characterized by a discrete parameter which has neither topological nor Noether charge associated with it and therefore represents a hair. A U (1) gauge field interacting with Einstein gravity can also be included. The thermodynamics is analyzed. Interestingly, the hairy black hole is always thermodynamically favoured with respect to the corresponding black hole with vanishing Pionic field.

  6. Estimation of Faults in DC Electrical Power System

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using l1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
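The sparse l1-regularized estimate described above can be sketched with a plain ISTA (proximal gradient) loop on invented toy data. This is not the ADAPT circuit model, and a real-time system would use a faster specialized solver.

```python
# Hedged sketch: sparse fault estimation as l1-regularized least squares,
#   minimize 0.5 * ||A x - b||^2 + lam * ||x||_1,
# solved by ISTA (proximal gradient). Toy random data, not a circuit model.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, iters=2000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the quadratic, then soft-threshold (l1 prox)
        x = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 100))               # 40 sensors, 100 fault states
x_true = np.zeros(100)
x_true[[5, 37]] = [2.0, -1.5]                # two active faults
b = A @ x_true
x_hat = ista(A, b, lam=0.1)                  # recovers the sparse fault vector
```

Even though only 40 measurements constrain 100 unknowns, the l1 penalty drives all but the two true fault entries to zero, which is the mechanism behind the accurate diagnostics reported in the abstract.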

  7. The numerical calculation of laminar boundary-layer separation

    NASA Technical Reports Server (NTRS)

    Klineberg, J. M.; Steger, J. L.

    1974-01-01

    Iterative finite-difference techniques are developed for integrating the boundary-layer equations, without approximation, through a region of reversed flow. The numerical procedures are used to calculate incompressible laminar separated flows and to investigate the conditions for regular behavior at the point of separation. Regular flows are shown to be characterized by an integrable saddle-type singularity that makes it difficult to obtain numerical solutions which pass continuously into the separated region. The singularity is removed and continuous solutions ensured by specifying the wall shear distribution and computing the pressure gradient as part of the solution. Calculated results are presented for several separated flows and the accuracy of the method is verified. A computer program listing and complete solution case are included.

  8. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to the staggered grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  9. On convergence and convergence rates for Ivanov and Morozov regularization and application to some parameter identification problems in elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Kaltenbacher, Barbara; Klassen, Andrej

    2018-05-01

    In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.

  10. An overview of unconstrained free boundary problems

    PubMed Central

    Figalli, Alessio; Shahgholian, Henrik

    2015-01-01

    In this paper, we present a survey concerning unconstrained free boundary problems of the type where B1 is the unit ball, Ω is an unknown open set, F1 and F2 are elliptic operators (admitting regular solutions), and is a function space to be specified in each case. Our main objective is to discuss a unifying approach to the optimal regularity of solutions to the above matching problems, and to list several open problems in this direction. PMID:26261367

  11. The solubility of hydrogen in rhodium, ruthenium, iridium and nickel.

    NASA Technical Reports Server (NTRS)

    Mclellan, R. B.; Oates, W. A.

    1973-01-01

    The temperature variation of the solubility of hydrogen in rhodium, ruthenium, iridium, and nickel in equilibrium with H2 gas at 1 atm pressure has been measured by a technique involving saturating the solvent metal with hydrogen, quenching, and analyzing the resultant solid solutions. The solubilities determined are small (the atom fraction of H is in the range from 0.0005 to 0.00001), and the results are consistent with the simple quasi-regular model for dilute interstitial solid solutions. The relative partial enthalpy and excess entropy of the dissolved hydrogen atoms have been calculated from the solubility data and compared with well-known correlations between these quantities.

  12. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of the dimensionality and correlation restrictions under which an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  13. REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.

    PubMed

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of the dimensionality and correlation restrictions under which an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.

  14. MIB Galerkin method for elliptic interface problems.

    PubMed

    Xia, Kelin; Zhan, Meng; Wei, Guo-Wei

    2014-12-15

    Material interfaces are omnipresent in real-world structures and devices. Mathematical modeling of material interfaces often leads to elliptic partial differential equations (PDEs) with discontinuous coefficients and singular sources, which are commonly called elliptic interface problems. The development of high-order numerical schemes for elliptic interface problems has become a well-defined field in applied and computational mathematics and has attracted much attention in the past decades. Despite significant advances, challenges remain in the construction of high-order schemes for nonsmooth interfaces, i.e., interfaces with geometric singularities such as tips, cusps and sharp edges. The challenge of geometric singularities is amplified when they are associated with low solution regularities, e.g., tip-geometry effects in many fields. The present work introduces a matched interface and boundary (MIB) Galerkin method for solving two-dimensional (2D) elliptic PDEs with complex interfaces, geometric singularities and low solution regularities. Cartesian grid based triangular elements are employed to avoid the time-consuming mesh generation procedure. Consequently, the interface cuts through elements. To ensure the continuity of classic basis functions across the interface, two sets of overlapping elements, called MIB elements, are defined near the interface. As a result, differentiation can be computed near the interface as if there were no interface. Interpolation functions are constructed on MIB element spaces to smoothly extend function values across the interface. A set of lowest order interface jump conditions is enforced on the interface, which, in turn, determines the interpolation functions. The performance of the proposed MIB Galerkin finite element method is validated by numerical experiments with a wide range of interface geometries, geometric singularities, low regularity solutions and grid resolutions. 
Extensive numerical studies confirm the designed second order convergence of the MIB Galerkin method in the L∞ and L2 errors. Some of the best results are obtained in the present work when the interface is C1 or Lipschitz continuous and the solution is C2 continuous.

  15. Regularity for Fully Nonlinear Elliptic Equations with Oblique Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Zhang, Kai

    2018-06-01

    In this paper, we obtain a series of regularity results for viscosity solutions of fully nonlinear elliptic equations with oblique derivative boundary conditions. In particular, we derive the pointwise C α, C 1,α and C 2,α regularity. As byproducts, we also prove the A-B-P maximum principle, Harnack inequality, uniqueness and solvability of the equations.

  16. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
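The diversification pressure of the l2 regularizer can be seen in a stripped-down minimum-variance sketch with a budget constraint. The expected-shortfall objective and support vector machinery of the paper are deliberately replaced here by a plain variance objective, and the covariance data are synthetic.

```python
# Hedged sketch: l2-regularized minimum-variance portfolio with budget
# constraint 1'w = 1, solved in closed form via the KKT conditions:
#   minimize w' Sigma w + lam * ||w||^2  subject to  sum(w) = 1
#   =>  w = M^{-1} 1 / (1' M^{-1} 1),  M = Sigma + lam * I.
import numpy as np

def min_var_weights(Sigma, lam):
    n = Sigma.shape[0]
    ones = np.ones(n)
    z = np.linalg.solve(Sigma + lam * np.eye(n), ones)
    return z / (ones @ z)

rng = np.random.default_rng(3)
G = rng.normal(size=(60, 10))
Sigma = G.T @ G / 60.0                  # noisy sample covariance, 10 assets
w_plain = min_var_weights(Sigma, lam=0.0)
w_reg = min_var_weights(Sigma, lam=1.0)
# the penalty pulls the weights toward the diversified 1/n portfolio,
# so the regularized solution has a smaller norm (more diversification)
```

A standard exchange argument shows that the squared norm of the solution is nonincreasing in lam, which is exactly the sense in which the regularizer enforces diversification.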

  17. Nonlinear refraction and reflection travel time tomography

    USGS Publications Warehouse

    Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.

    1998-01-01

We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first-arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge, but a smooth gradient zone between the Catalina Ridge and the San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
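The regularized linear step at the heart of such a scheme, a Laplacian penalty on the model parameters solved with conjugate gradients, can be sketched as follows. The sensitivity matrix `G`, the damping weight `lam`, and the synthetic model are illustrative assumptions, not the authors' travel-time code:

```python
import numpy as np

def laplacian(n):
    """1D Laplacian smoothing operator, a stand-in for the regularizer
    applied to cell slownesses and reflector geometry."""
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

def cg_solve(A, b, iters=200, tol=1e-10):
    """Plain conjugate gradient for the (SPD) regularized normal equations."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
n = 40
G = rng.normal(size=(100, n))          # stand-in for travel-time sensitivities
m_true = np.sin(np.linspace(0, np.pi, n))
d = G @ m_true + 0.01 * rng.normal(size=100)
lam = 1.0
L = laplacian(n)
A = G.T @ G + lam * L.T @ L            # Laplacian-regularized normal equations
m_est = cg_solve(A, G.T @ d)
```

Each nonlinear iteration of the tomography would rebuild `G` from the current ray paths and repeat this linear solve.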

  18. A problem with inverse time for a singularly perturbed integro-differential equation with diagonal degeneration of the kernel of high order

    NASA Astrophysics Data System (ADS)

    Bobodzhanov, A. A.; Safonov, V. F.

    2016-04-01

We consider an algorithm for constructing asymptotic solutions regularized in the sense of Lomov (see [1], [2]). We show that such problems can be reduced to integro-differential equations with inverse time. In contrast to known papers devoted to this topic (see, for example, [3]), we study a fundamentally new case, which is characterized by the absence, in the differential part, of a linear operator that isolates, in the asymptotics of the solution, constituents described by boundary functions, and by the fact that the integral operator has a kernel with diagonal degeneration of high order. Furthermore, the spectrum of the regularization operator A(t) (see below) may contain purely imaginary eigenvalues, which causes difficulties in the application of the methods of construction of asymptotic solutions proposed in the monograph [3]. Based on an analysis of the principal term of the asymptotics, we isolate a class of inhomogeneities and initial data for which the exact solution of the original problem tends to the limit solution (as ε → +0) on the entire time interval under consideration, including a boundary-layer zone (that is, we solve the so-called initialization problem). The paper is of a theoretical nature and is designed to lead to a greater understanding of the problems in the theory of singular perturbations. There may be applications in various applied areas where models described by integro-differential equations are used (for example, in elasticity theory, the theory of electrical circuits, and so on).

  19. Regular black holes from semi-classical down to Planckian size

    NASA Astrophysics Data System (ADS)

    Spallucci, Euro; Smailagic, Anais

In this paper, we review various models of curvature singularity free black holes (BHs). In the first part of the review, we describe semi-classical solutions of the Einstein equations which, however, contain a "quantum" input through the matter source. We start by reviewing the early model by Bardeen where the metric is regularized by hand through a short-distance cutoff, which is justified in terms of nonlinear electrodynamical effects. This toy model is useful to point out the common features shared by all regular semi-classical black holes. Then, we solve the Einstein equations with a Gaussian source encoding the quantum spread of an elementary particle. We identify the a priori arbitrary Gaussian width with the Compton wavelength of the quantum particle. This Compton-Gauss model leads to an estimate of the terminal density that a gravitationally collapsed object can achieve. We identify this density with the Planck density, and reformulate the Gaussian model assuming this as its peak density. All these models are physically reliable as long as the BH mass is large compared with the Planck mass. In the truly Planckian regime, the semi-classical approximation breaks down. In this case, a fully quantum BH description is needed. In the last part of this paper, we propose a nongeometrical quantum model of Planckian BHs implementing the Holographic Principle and realizing the "classicalization" scenario recently introduced by Dvali and collaborators. The classical relation between the mass and radius of the BH emerges only in the classical limit, far away from the Planck scale.

  20. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  1. Progress towards daily "swath" solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.; Sakumura, C.

    2015-12-01

The GRACE mission has provided invaluable data, the only data of their kind, measuring the total water column in the Earth system over the past 13 years. The GRACE solutions available from the project have been monthly average solutions. There have been attempts by several groups to produce shorter time-window solutions with different techniques. There is also an experimental quick-look GRACE solution available from CSR that implements a sliding-window approach while applying variable daily data weights. All of these GRACE solutions require special handling for data assimilation. This study explores the possibility of generating a true daily GRACE solution by computing a daily "swath" total water storage (TWS) estimate from GRACE using the Tikhonov regularization and high-resolution monthly mascon estimation implemented at CSR. This paper discusses the techniques for computing such a solution and discusses the error and uncertainty characterization. We perform comparisons with official RL05 GRACE solutions and with alternate mascon solutions from CSR to understand the impact on the science results. We evaluate these solutions with emphasis on the temporal characteristics of the signal content and validate them against multiple models and in-situ data sets.

  2. The full Keller-Segel model is well-posed on nonsmooth domains

    NASA Astrophysics Data System (ADS)

    Horstmann, D.; Meinlschmidt, H.; Rehberg, J.

    2018-04-01

In this paper we prove that the full Keller-Segel system, a quasilinear strongly coupled reaction-crossdiffusion system of four parabolic equations, is well-posed in the sense that it always admits a unique local-in-time solution in an adequate function space, provided that the initial values are suitably regular. The proof is done via an abstract solution theorem for nonlocal quasilinear equations by Amann and is carried out for general source terms. It is fundamentally based on recent nontrivial elliptic and parabolic regularity results which hold true even on rather general nonsmooth spatial domains. For space dimensions 2 and 3, this enables us to work in a nonsmooth setting which is not available in classical parabolic systems theory. Apparently, there exists no comparable existence result for the full Keller-Segel system up to now. Due to the large class of possibly nonsmooth domains admitted, we also obtain new results for the ‘standard’ Keller-Segel system consisting of only two equations as a special case. This work is dedicated to Prof Willi Jäger.

  3. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

The contact concentration measurement data assimilation problem is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimum of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure for the resulting analysis and reduces the need to calculate the model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of the adjoint problem solution. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate is an upper bound acts as the assimilation parameter.
The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage that is responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme with respect to time. The scheme is based on analytical extraction of the exponential terms from the solution. This guarantees that the evaluated concentrations remain positive. The splitting-based structure of the algorithm provides means for efficient parallel realization. The work is partially supported by Programs No 4 of the Presidium of RAS and No 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and Integration projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004.
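The Morozov discrepancy principle mentioned above picks the regularization weight so that the data misfit of the regularized solution matches the estimated noise level. A minimal sketch for a linear Tikhonov problem follows; the synthetic operator, bisection bracket, and noise level are assumptions, not the splitting-based scheme of the paper:

```python
import numpy as np

def tikhonov(G, d, lam):
    """Tikhonov-regularized least-squares solution."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

def morozov_lambda(G, d, delta, lo=1e-8, hi=1e4, iters=60):
    """Bisect on log(lambda) until the residual matches the noise level
    delta; the residual ||G m_lam - d|| grows monotonically with lambda."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.linalg.norm(G @ tikhonov(G, d, mid) - d)
        if r < delta:
            lo = mid   # under-regularized: residual still below noise level
        else:
            hi = mid
    return np.sqrt(lo * hi)

rng = np.random.default_rng(2)
G = rng.normal(size=(80, 30))
m_true = rng.normal(size=30)
noise = 0.05 * rng.normal(size=80)
d = G @ m_true + noise
delta = np.linalg.norm(noise)          # in practice, estimated from the data
lam = morozov_lambda(G, d, delta)
m_est = tikhonov(G, d, lam)
```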

  4. Corrosion phenomena in sodium-potassium coolant resulting from solute interaction in multicomponent solution

    NASA Astrophysics Data System (ADS)

    Krasin, V. P.; Soyustova, S. I.

    2018-03-01

The solubilities of Fe, Cr, Ni, V, Mn and Mo in a sodium-potassium melt have been calculated using the mathematical framework of the pseudo-regular solution model. The calculation results are compared with available published experimental data on mass transfer of austenitic stainless steel components in a sodium-potassium loop under non-isothermal conditions. It is shown that the pair-interaction parameters of oxygen with the transition metals can be used to predict the corrosion behavior of structural materials in a sodium-potassium melt in the presence of oxygen impurity. Calculated threshold oxygen concentrations for the formation of ternary oxides of sodium with the transition metals (Fe, Cr, Ni, V, Mn, Mo) are given for conditions in which the pure solid metal is in contact with the sodium-potassium melt.

  5. An investigation on a two-dimensional problem of Mode-I crack in a thermoelastic medium

    NASA Astrophysics Data System (ADS)

    Kant, Shashi; Gupta, Manushi; Shivay, Om Namha; Mukhopadhyay, Santwana

    2018-04-01

In this work, we consider a two-dimensional dynamical problem of an infinite space with a finite linear Mode-I crack and employ a recently proposed heat conduction model: exact heat conduction with a single delay term. The thermoelastic medium is taken to be homogeneous and isotropic. The boundary of the crack is subjected to prescribed temperature and stress distributions. The Fourier and Laplace transform techniques are used to solve the problem. Mathematical modeling reduces the solution of the problem to the solution of a system of four dual integral equations, which is in turn equivalent to solving a Fredholm integral equation of the first kind; the latter is solved using the regularization method. The inverse Laplace transform is carried out using the Bellman method, and we obtain numerical solutions for all the physical field variables in the physical domain. Results are shown graphically; we highlight the effects of the presence of the crack on the thermoelastic interactions inside the medium in the present context, and compare the results with those of type-III thermoelasticity.

  6. Spark formation as a moving boundary process

    NASA Astrophysics Data System (ADS)

    Ebert, Ute

    2006-03-01

The growth process of spark channels has recently become accessible through complementary methods. First, I will review experiments with nanosecond photographic resolution and with fast and well defined power supplies that appropriately resolve the dynamics of electric breakdown [1]. Second, I will discuss the elementary physical processes as well as present computations of spark growth and branching with adaptive grid refinement [2]. These computations resolve three well separated scales of the process that emerge dynamically. Third, this scale separation motivates a hierarchy of models on different length scales. In particular, I will discuss a moving boundary approximation for the ionization fronts that generate the conducting channel. The resulting moving boundary problem shows strong similarities with classical viscous fingering. For viscous fingering, it is known that the simplest model forms unphysical cusps within finite time that are suppressed by a regularizing condition on the moving boundary. For ionization fronts, we derive a new condition on the moving boundary of mixed Dirichlet-Neumann type (φ = ε∂nφ) that indeed regularizes all structures investigated so far. In particular, we present compact analytical solutions with regularization, both for uniformly translating shapes and for their linear perturbations [3]. These solutions are so simple that they may acquire a paradigmatic role in the future. Within linear perturbation theory, they explicitly show the convective stabilization of a curved front while planar fronts are linearly unstable against perturbations of arbitrary wave length. [1] T.M.P. Briels, E.M. van Veldhuizen, U. Ebert, TU Eindhoven. [2] C. Montijn, J. Wackers, W. Hundsdorfer, U. Ebert, CWI Amsterdam. [3] B. Meulenbroek, U. Ebert, L. Schäfer, Phys. Rev. Lett. 95, 195004 (2005).

  7. Hydrodynamical Aspects of the Formation of Spiral-Vortical Structures in Rotating Gaseous Disks

    NASA Astrophysics Data System (ADS)

    Elizarova, T. G.; Zlotnik, A. A.; Istomina, M. A.

    2018-01-01

    This paper is dedicated to numerical simulations of spiral-vortical structures in rotating gaseous disks using a simple model based on two-dimensional, non-stationary, barotropic Euler equations with a body force. The results suggest the possibility of a purely hydrodynamical basis for the formation and evolution of such structures. New, axially symmetric, stationary solutions of these equations are derived that modify known approximate solutions. These solutions with added small perturbations are used as initial data in the non-stationary problem, whose solution demonstrates the formation of density arms with bifurcation. The associated redistribution of angular momentum is analyzed. The correctness of laboratory experiments using shallow water to describe the formation of large-scale vortical structures in thin gaseous disks is confirmed. The computations are based on a special quasi-gas-dynamical regularization of the Euler equations in polar coordinates.

  8. Ill-posedness in modeling mixed sediment river morphodynamics

    NASA Astrophysics Data System (ADS)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics concerning its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than what was found in previous analyses, not only comprising cases of bed degradation into a substrate finer than the active layer but also in aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment for which we show that ill-posedness occurs in a wider range of conditions than the active layer model.

  9. Description of waves in inhomogeneous domains using Heun's equation

    NASA Astrophysics Data System (ADS)

    Bednarik, M.; Cervenka, M.

    2018-04-01

There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains, and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of an inhomogeneous domain. From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. The interval on which the solution is sought, however, includes two regular singular points; for this reason, a method is proposed which resolves this problem based only on the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.

  10. Two-dimensional joint inversion of Magnetotelluric and local earthquake data: Discussion on the contribution to the solution of deep subsurface structures

    NASA Astrophysics Data System (ADS)

    Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin

    2018-02-01

Joint inversion of data sets collected using several geophysical exploration methods has gained importance, and associated algorithms have been developed. To explore deep subsurface structures, Magnetotelluric and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, it is not possible to increase the data quality and the resolution of the model parameters. For this reason, the solution of deep structures with the individual usage of the methods cannot be fully attained. In this paper, we first focus on the effects of both Magnetotelluric and local earthquake data sets on the solution of deep structures and discuss the results on the basis of the resolving power of the methods. The presence of deep-focus seismic sources increases the resolution of deep structures. Moreover, the conductivity distribution of relatively shallow structures can be solved with high resolution by using the MT algorithm. Therefore, we developed a new joint inversion algorithm based on the cross-gradient function in order to jointly invert Magnetotelluric and local earthquake data sets. In this study, we added a new regularization parameter to the second term of the parameter correction vector of Gallardo and Meju (2003). The new regularization parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term in the solution. The results show that even in cases where resistivity and velocity boundaries differ, both methods influence each other positively. In addition, the regions of common structural boundaries of the models are clearly mapped compared with the original models. Furthermore, deep structures are identified satisfactorily even when using the minimum number of seismic sources. In this paper, as a basis for future studies, we discuss joint inversion of Magnetotelluric and local earthquake data sets only in two-dimensional space.
In light of these results, and with advances in three-dimensional modelling and inversion algorithms, it should become easier to identify underground structures with high resolution.

  11. Global multiresolution models of surface wave propagation: comparing equivalently regularized Born and ray theoretical solutions

    NASA Astrophysics Data System (ADS)

    Boschi, Lapo

    2006-10-01

    I invert a large set of teleseismic phase-anomaly observations, to derive tomographic maps of fundamental-mode surface wave phase velocity, first via ray theory, then accounting for finite-frequency effects through scattering theory, in the far-field approximation and neglecting mode coupling. I make use of a multiple-resolution pixel parametrization which, in the assumption of sufficient data coverage, should be adequate to represent strongly oscillatory Fréchet kernels. The parametrization is finer over North America, a region particularly well covered by the data. For each surface-wave mode where phase-anomaly observations are available, I derive a wide spectrum of plausible, differently damped solutions; I then conduct a trade-off analysis, and select as optimal solution model the one associated with the point of maximum curvature on the trade-off curve. I repeat this exercise in both theoretical frameworks, to find that selected scattering and ray theoretical phase-velocity maps are coincident in pattern, and differ only slightly in amplitude.
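The selection rule described, taking the point of maximum curvature on the trade-off curve between data misfit and model norm, can be sketched for a linear damped inversion. The finite-difference curvature estimate and the synthetic data are illustrative assumptions, not the phase-velocity inversion itself:

```python
import numpy as np

def damped_solution(G, d, lam):
    """Damped least-squares solution for one value of the damping."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

def max_curvature_lambda(G, d, lams):
    """Select the damping at the point of maximum curvature on the
    (log residual norm, log model norm) trade-off curve."""
    pts = []
    for lam in lams:
        m = damped_solution(G, d, lam)
        pts.append((np.log(np.linalg.norm(G @ m - d)),
                    np.log(np.linalg.norm(m))))
    x, y = np.array(pts).T
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # finite-difference curvature; small epsilon guards flat segments
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-12) ** 1.5
    return lams[int(np.argmax(kappa))]

rng = np.random.default_rng(4)
G = rng.normal(size=(60, 25))
m_true = rng.normal(size=25)
d = G @ m_true + 0.1 * rng.normal(size=60)
lams = np.logspace(-4, 2, 25)
best = max_curvature_lambda(G, d, lams)
```

In the abstract's workflow this selection is repeated once per surface-wave mode, giving one "optimal" map from the spectrum of differently damped solutions.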

  12. On the membrane approximation in isothermal film casting

    NASA Astrophysics Data System (ADS)

    Hagen, Thomas

    2014-08-01

    In this work, a one-dimensional model for isothermal film casting is studied. Film casting is an important engineering process to manufacture thin films and sheets from a highly viscous polymer melt. The model equations account for variations in film width and film thickness, and arise from thinness and kinematic assumptions for the free liquid film. The first aspect of our study is a rigorous discussion of the existence and uniqueness of stationary solutions. This objective is approached via the argument principle, exploiting the homotopy invariance of a family of analytic functions. As our second objective, we analyze the linearization of the governing equations about stationary solutions. It is shown that solutions for the associated boundary-initial value problem are given by a strongly continuous semigroup of bounded linear operators. To reach this result, we cast the relevant Cauchy problem in a more accessible form. These transformed equations allow us insight into the regularity of the semigroup, thus yielding the validity of the spectral mapping theorem for the semigroup and the spectrally determined growth property.

  13. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
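In one dimension the idea can be sketched in Fourier space: continuation of noisy boundary data amplifies high-wavenumber modes like exp(|k|z), and a Gaussian smoothing of the initial data caps that growth. The grid, the height `z`, and the smoothing width `sigma` below are illustrative assumptions, not the authors' magnetograph-based choice:

```python
import numpy as np

def regularized_continuation(b0, dx, z, sigma):
    """Continue boundary data b0 a height z against the stable direction.
    The factor exp(|k| z) amplifies measurement noise without bound; the
    Gaussian factor exp(-(sigma*k)^2) is the smoothing of the initial
    data that keeps the amplification finite."""
    k = 2 * np.pi * np.fft.fftfreq(len(b0), d=dx)
    bh = np.fft.fft(b0)
    bh *= np.exp(np.abs(k) * z - (sigma * k) ** 2)
    return np.fft.ifft(bh).real

rng = np.random.default_rng(6)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
b0 = np.sin(x) + 0.01 * rng.normal(size=256)     # noisy boundary field
b_reg = regularized_continuation(b0, dx=x[1] - x[0], z=0.5, sigma=1.0)
```

Without the Gaussian factor the highest modes would be amplified by exp(|k_max| z) and the noise would dominate; with it, the worst-case amplification is bounded by exp(z²/(4σ²)).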

  14. Modelling and properties of a nonlinear autonomous switching system in fed-batch culture of glycerol

    NASA Astrophysics Data System (ADS)

    Wang, Juan; Sun, Qingying; Feng, Enmin

    2012-11-01

    A nonlinear autonomous switching system is proposed to describe the coupled fed-batch fermentation with the pH as the feedback parameter. We prove the non-Zeno behaviors of the switching system and some basic properties of its solution, including the existence, uniqueness, boundedness and regularity. Numerical simulation is also carried out, which reveals that the proposed system can describe the factual fermentation process properly.

  15. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
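One common way to realize a multiplicative regularization is to minimize the product of the misfit and the regularization term; the stationarity condition then fixes the effective weight from the current iterate, so no parameter is chosen beforehand. The sketch below illustrates this idea for a linear problem with an identity regularizer; it is not the authors' exact formulation, and the synthetic data are assumptions:

```python
import numpy as np

def multiplicative_tikhonov(G, d, iters=30):
    """Fixed-point iteration for J(x) = ||G x - d||^2 * ||x||^2.
    Setting the gradient of J to zero gives (G'G + lam*I) x = G'd with
    lam = ||G x - d||^2 / ||x||^2, so the amount of regularization is
    re-adjusted from the iterate at every step."""
    n = G.shape[1]
    x = np.linalg.solve(G.T @ G + np.eye(n), G.T @ d)   # mild initial guess
    for _ in range(iters):
        lam = np.linalg.norm(G @ x - d) ** 2 / np.linalg.norm(x) ** 2
        x = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)
    return x

rng = np.random.default_rng(5)
G = rng.normal(size=(60, 30))
x_true = rng.normal(size=30)
d = G @ x_true + 0.05 * rng.normal(size=60)
x_est = multiplicative_tikhonov(G, d)
```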

  16. Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.

    PubMed

    Engemann, Denis A; Gramfort, Alexandre

    2015-03-01

Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interface (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero-mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as the empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue, the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation, all of these models can be tuned and compared based on the Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals. Copyright © 2015 Elsevier Inc. All rights reserved.
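The cross-validated model selection described above can be sketched for one such family, shrinkage of the empirical covariance toward its diagonal, scored by held-out Gaussian likelihood. The fold count, shrinkage grid, and zero-mean synthetic data are illustrative assumptions (the paper additionally compares PPCA and FA):

```python
import numpy as np

def gaussian_loglik(X, cov):
    """Average Gaussian log-likelihood of (assumed zero-mean) samples X."""
    n = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    prec = np.linalg.inv(cov)
    quad = np.einsum('ij,jk,ik->i', X, prec, X).mean()
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

def cv_shrinkage(X, alphas, k=5):
    """Pick the shrinkage weight toward the diagonal that maximizes
    the likelihood of held-out data, as in cross-validated covariance
    model selection."""
    folds = np.array_split(np.arange(len(X)), k)
    scores = []
    for a in alphas:
        s = 0.0
        for f in folds:
            train = np.delete(np.arange(len(X)), f)
            ec = np.cov(X[train], rowvar=False, bias=True)
            cov = (1 - a) * ec + a * np.diag(np.diag(ec))
            s += gaussian_loglik(X[f], cov)
        scores.append(s / k)
    return alphas[int(np.argmax(scores))]

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 20))       # few samples vs. dimension: EC is poor
best = cv_shrinkage(X, alphas=[0.0, 0.1, 0.3, 0.5, 0.8])
```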

  17. Remarks on regular black holes

    NASA Astrophysics Data System (ADS)

    Nicolini, Piero; Smailagic, Anais; Spallucci, Euro

    Recently, it has been claimed by Chinaglia and Zerbini that the curvature singularity is present even in the so-called regular black hole solutions of the Einstein equations. In this brief note, we show that this criticism is devoid of any physical content.

  18. Source of the Kerr-Newman solution as a gravitating bag model: 50 years of the problem of the source of the Kerr solution

    NASA Astrophysics Data System (ADS)

    Burinskii, Alexander

    2016-01-01

It is known that the gravitational and electromagnetic fields of an electron are described by the ultra-extreme Kerr-Newman (KN) black hole solution with an extremely high spin/mass ratio. This solution is singular and has a topological defect, the Kerr singular ring, which may be regularized by introducing a solitonic source based on the Higgs mechanism of symmetry breaking. The source represents a domain wall bubble interpolating between the flat region inside the bubble and the external KN solution. It was shown recently that the source represents a supersymmetric bag model, and its structure is unambiguously determined by the Bogomolnyi equations. The Dirac equation is embedded inside the bag consistently with the twistor structure of the Kerr geometry, and acquires mass from the Yukawa coupling with the Higgs field. The KN bag turns out to be flexible, and for the parameters of an electron, it takes the form of a very thin disk with a circular string placed along the sharp boundary of the disk. Excitation of this string by a traveling wave creates a circulating singular pole, indicating that the bag-like source of the KN solution unifies the dressed and point-like electron in a single bag-string-quark system.

  19. On solvability of boundary value problems for hyperbolic fourth-order equations with nonlocal boundary conditions of integral type

    NASA Astrophysics Data System (ADS)

    Popov, Nikolay S.

    2017-11-01

    Solvability of some initial-boundary value problems for linear hyperbolic equations of the fourth order is studied. A condition on the lateral boundary in these problems relates the values of a solution or the conormal derivative of a solution to the values of some integral operator applied to a solution. Nonlocal boundary-value problems for one-dimensional hyperbolic second-order equations with integral conditions on the lateral boundary were considered in the articles by A.I. Kozhanov. Higher-dimensional hyperbolic equations of higher order with integral conditions on the lateral boundary were not studied earlier. The existence and uniqueness theorems of regular solutions are proven. The method of regularization and the method of continuation in a parameter are employed to establish solvability.

  20. A regularization of the Burgers equation using a filtered convective velocity

    NASA Astrophysics Data System (ADS)

    Norgard, Greg; Mohseni, Kamran

    2008-08-01

    This paper examines the properties of a regularization of the Burgers equation in one and multiple dimensions using a filtered convective velocity, which we have dubbed the convectively filtered Burgers (CFB) equation. A physical motivation behind the filtering technique is presented. An existence and uniqueness theorem for multiple dimensions and a general class of filters is proven. Multiple invariants of motion are found for the CFB equation and are shown to be shared with the viscous and inviscid Burgers equations. Traveling wave solutions are found for a general class of filters and are shown to converge to weak solutions of the inviscid Burgers equation with the correct wave speed. Numerical simulations are conducted in 1D and 2D cases where the shock behavior, shock thickness, and kinetic energy decay are examined. Energy spectra are also examined and are shown to be related to the smoothness of the solutions. This approach is presented with the hope of being extended to shock regularization of the compressible Euler equations.
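
    The central idea — replacing the convective velocity by a low-pass filtered copy of the solution — can be sketched in 1D. The Helmholtz filter, spectral discretization, and forward-Euler time stepping below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# 1D convectively filtered Burgers: u_t + ubar * u_x = 0, where ubar is a
# low-pass filtered copy of u, here obtained with the Helmholtz filter
# (I - alpha^2 d^2/dx^2) ubar = u, applied spectrally on a periodic grid.
N, L, alpha = 256, 2 * np.pi, 0.05
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
filt = 1.0 / (1.0 + (alpha * k) ** 2)     # Fourier symbol of the filter

u = np.sin(x)                             # profile that steepens into a shock
dt, steps = 1e-3, 300                     # forward Euler, stopped before t = 1
for _ in range(steps):
    ubar = np.real(np.fft.ifft(filt * np.fft.fft(u)))
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    u = u - dt * ubar * ux
```

    For a symmetric filter the filtered convection term conserves the mean of u (momentum), one of the invariants shared with the viscous and inviscid Burgers equations.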

  1. Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2013-01-01

    Recent studies have demonstrated theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to the existing algorithms that use local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss, but does not use any approximation to the penalty. This strategy makes it possible to avoid the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish theoretical convergence property of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for the penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
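
    The key step — majorizing the logistic negative log-likelihood by a quadratic with fixed curvature 1/4 while treating the MCP penalty exactly in each coordinate — can be sketched as follows. The simulated data, penalty parameters, and fixed iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def mcp_prox(z, v, lam, gam):
    """Exact minimizer of v/2*(b - z)^2 + MCP(b; lam, gam); requires v > 1/gam."""
    if abs(z) > gam * lam:
        return z
    return soft(v * z, lam) / (v - 1.0 / gam)

def mmcd_logistic(X, y, lam, gam=8.0, iters=100):
    """Sketch of majorization-minimization by coordinate descent (MMCD) for
    MCP-penalized logistic regression. Columns of X are assumed standardized
    so that x_j'x_j/n = 1, letting a fixed quadratic with curvature 1/4
    majorize the negative log-likelihood in every coordinate."""
    n, p = X.shape
    v = 0.25
    b, eta = np.zeros(p), np.zeros(n)
    for _ in range(iters):
        for j in range(p):
            mu = 1.0 / (1.0 + np.exp(-eta))
            z = b[j] - (X[:, j] @ (mu - y)) / (n * v)
            b_new = mcp_prox(z, v, lam, gam)
            if b_new != b[j]:
                eta += X[:, j] * (b_new - b[j])
                b[j] = b_new
    return b

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
X = (X - X.mean(0)) / X.std(0)                 # standardize columns
b_true = np.zeros(p)
b_true[0], b_true[1] = 2.0, -2.0               # sparse ground truth
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ b_true))).astype(float)
b_hat = mmcd_logistic(X, y, lam=0.15)
```

    Because the majorizer is a true upper bound on the loss, each coordinate update decreases the penalized objective, and no per-update scaling factor has to be recomputed.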

  2. Properties of Solutions to the Irving-Mullineux Oscillator Equation

    NASA Astrophysics Data System (ADS)

    Mickens, Ronald E.

    2002-10-01

    A nonlinear differential equation is given in the book by Irving and Mullineux to model certain oscillatory phenomena.^1 They use a regular perturbation method^2 to obtain a first-approximation to the assumed periodic solution. However, their result is not uniformly valid and this means that the obtained solution is not periodic because of the presence of secular terms. We show their way of proceeding is not only incorrect, but that in fact the actual solution to this differential equation is a damped oscillatory function. Our proof uses the method of averaging^2,3 and the qualitative theory of differential equations for 2-dim systems. A nonstandard finite-difference scheme is used to calculate numerical solutions for the trajectories in phase-space. References: ^1J. Irving and N. Mullineux, Mathematics in Physics and Engineering (Academic, 1959); section 14.1. ^2R. E. Mickens, Nonlinear Oscillations (Cambridge University Press, 1981). ^3D. W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations (Oxford, 1987).

  3. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved performances similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. Contraction of high eccentricity satellite orbits using uniformly regular KS canonical elements with oblate diurnally varying atmosphere.

    NASA Astrophysics Data System (ADS)

    Raj, Xavier James

    2016-07-01

    Accurate orbit prediction of an artificial satellite under the influence of air drag is one of the most difficult and intractable problems in orbital dynamics. The orbital decay of these satellites is mainly controlled by atmospheric drag effects. The effects of the atmosphere are difficult to determine, since the atmospheric density undergoes large fluctuations. The classical Newtonian equations of motion, which are nonlinear, are not suitable for long-term integration. Many transformations have emerged in the literature to stabilize the equations of motion, either to reduce the accumulation of local numerical errors or to allow the use of large integration step sizes, or both, in the transformed space. One such transformation is the KS transformation of Kustaanheimo and Stiefel, who regularized the nonlinear Kepler equations of motion and reduced them to the linear differential equations of a harmonic oscillator of constant frequency. The method of KS total energy element equations has been found to be a very powerful method for obtaining numerical as well as analytical solutions with respect to any type of perturbing force, as the equations are less sensitive to round-off and truncation errors. The uniformly regular KS canonical equations are a particular canonical form of the KS differential equations, in which all ten KS canonical elements αi and βi are constant for unperturbed motion. These equations permit the uniform formulation of the basic laws of elliptic, parabolic and hyperbolic motion. Using these equations, analytical solutions were developed for short-term orbit predictions with respect to Earth's zonal harmonic terms J2, J3, J4. Further, these equations were utilized to include the canonical forces, and analytical theories with air drag were developed for low-eccentricity orbits (e < 0.2) with different atmospheric models. Using uniformly regular KS canonical elements, an analytical theory was developed for high-eccentricity (e > 0.2) orbits by assuming the atmosphere to be oblate only. In this paper a new non-singular analytical theory is developed for the motion of high-eccentricity satellite orbits with an oblate, diurnally varying atmosphere in terms of the uniformly regular KS canonical elements. The analytical solutions are generated up to fourth-order terms using a new independent variable and c (a small parameter dependent on the flattening of the atmosphere). Due to symmetry, only two of the nine equations need to be solved analytically to compute the state vector and the change in energy at the end of each revolution. The theory is developed on the assumption that density is constant on the surfaces of spheroids of fixed ellipticity ɛ (equal to the Earth's ellipticity, 0.00335) whose axes coincide with the Earth's axis. Numerical experimentation with the analytical solution for a wide range of perigee heights, eccentricities, and orbital inclinations has been carried out for up to 100 revolutions. Comparisons with numerically integrated values show that they match quite well. The effectiveness of the present analytical solutions is demonstrated by comparing the results with other analytical solutions in the literature.
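
    For readers unfamiliar with the KS transformation mentioned above, the sketch below shows one common convention for the map from the four KS parameters u to a 3D position r. Its defining identity, |r| = |u|², is what removes the 1/r singularity of the Kepler problem; the harmonic-oscillator form of the transformed equations of motion is not reproduced here.

```python
import numpy as np

def ks_map(u):
    """Kustaanheimo-Stiefel map from R^4 parameters to a 3D position
    (one common sign convention)."""
    u1, u2, u3, u4 = u
    return np.array([
        u1**2 - u2**2 - u3**2 + u4**2,
        2.0 * (u1 * u2 - u3 * u4),
        2.0 * (u1 * u3 + u2 * u4),
    ])

rng = np.random.default_rng(42)
u = rng.standard_normal(4)
r = ks_map(u)
# Defining identity: |r| = |u|^2, so the Kepler 1/r singularity at r = 0
# becomes the regular point u = 0 in the transformed variables.
```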

  5. The Thermal Equilibrium Solution of a Generic Bipolar Quantum Hydrodynamic Model

    NASA Astrophysics Data System (ADS)

    Unterreiter, Andreas

    The thermal equilibrium state of a bipolar, isothermal quantum fluid confined to a bounded domain Ω ⊂ ℝ^d, d = 1, 2 or 3, is entirely described by the particle densities n, p minimizing an energy functional in which G1,2 are strictly convex real-valued functions. It is shown that this variational problem has a unique minimizer and some regularity results are proven. The semi-classical limit is carried out, recovering the minimizer of the limiting functional. The subsequent zero space charge limit leads to extensions of the classical boundary conditions. Due to the lack of regularity, the asymptotics cannot be settled on Sobolev embedding arguments. The limit is carried out by means of a compactness-by-convexity principle.

  6. Half-quadratic variational regularization methods for speckle-suppression and edge-enhancement in SAR complex image

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Wang, Guang-xin

    2008-12-01

    Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, and speckle is its inherent defect, which badly affects the interpretation and recognition of SAR targets. Conventional methods for removing speckle usually operate on the real SAR image and reduce the edges of the image while suppressing the speckle. Moreover, conventional methods lose the image phase information. Removing speckle while simultaneously enhancing targets and edges remains a puzzle. To suppress the speckle and enhance the targets and edges simultaneously, a half-quadratic variational regularization method in the complex SAR image is presented, based on prior knowledge of the targets and edges. Because the cost function is non-quadratic, non-convex and complicated, a half-quadratic variational regularization is used to construct a new cost function, which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model and the selection of the model parameters are studied carefully. Finally, we validate the method using real SAR data. Theoretical analysis and experimental results illustrate the feasibility of the proposed method. Furthermore, the proposed method preserves the image phase information.
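
    The alternate-optimization structure of a half-quadratic method can be illustrated on a toy 1D denoising problem: with the auxiliary weights fixed, the subproblem in the signal is quadratic, and the weights themselves have a closed form. This multiplicative-form sketch on real-valued data is only an analogy to the paper's complex-SAR formulation; the difference operator, potential function, and parameters are assumptions.

```python
import numpy as np

def half_quadratic_denoise(y, lam=2.0, eps=1e-6, iters=30):
    """Edge-preserving denoising by half-quadratic (multiplicative-form)
    minimization of 0.5*||x - y||^2 + lam * sum sqrt((Dx)_i^2 + eps)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # first-difference operator
    x = y.copy()
    for _ in range(iters):
        d = D @ x
        w = 1.0 / np.sqrt(d**2 + eps)       # auxiliary (edge-aware) weights
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)           # quadratic subproblem in x
    return x

rng = np.random.default_rng(3)
step = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant edge
noisy = step + 0.1 * rng.standard_normal(100)
clean = half_quadratic_denoise(noisy)
```

    The weights shrink near large gradients, so noise is smoothed within flat regions while the step edge survives the regularization.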

  7. Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.

    PubMed

    Hu, Yue; Allen, Genevera I

    2015-12-01

    Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, and thus greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem. © 2015, The International Biometric Society.

  8. 21 CFR 606.65 - Supplies and reagents.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...

  9. 21 CFR 606.65 - Supplies and reagents.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...

  10. 21 CFR 606.65 - Supplies and reagents.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...

  11. 21 CFR 606.65 - Supplies and reagents.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...

  12. Global gradient estimates for divergence-type elliptic problems involving general nonlinear operators

    NASA Astrophysics Data System (ADS)

    Cho, Yumi

    2018-05-01

    We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems, which approach the regular problems considered as the gradient variable goes to infinity.

  13. Exact Solution of Klein-Gordon and Dirac Equations with Snyder-de Sitter Algebra

    NASA Astrophysics Data System (ADS)

    Merad, M.; Hadj Moussa, M.

    2018-01-01

    In this paper, we present the exact solution of the (1+1)-dimensional relativistic Klein-Gordon and Dirac equations with linear vector and scalar potentials in the framework of the deformed Snyder-de Sitter model. Introducing some changes of variables, we show that a one-dimensional linear potential for the relativistic system in a deformed space can be equivalent to the trigonometric Rosen-Morse potential in a regular space. In both cases, we determine explicitly the energy eigenvalues and the corresponding eigenfunctions, expressed in terms of Romanovski polynomials. The limiting cases α1 → 0 and α2 → 0 are analyzed and compared with those in the literature.

  14. Thermal Stability of Nanocrystalline Alloys by Solute Additions and A Thermodynamic Modeling

    NASA Astrophysics Data System (ADS)

    Saber, Mostafa

    Nanocrystalline alloys show superior properties due to their exceptional microstructure. Thermal stability of these materials is a critical aspect. It is well known that grain boundaries in nanocrystalline microstructures cause a significant increase in the total free energy of the system. A driving force provided to reduce this excess free energy can cause grain growth. The presence of a solute addition within a nanocrystalline alloy can lead to thermal stability. Kinetic and thermodynamic stabilization are the two basic mechanisms by which stability of a nanoscale grain size can be achieved at high temperatures. The basis of this thesis is to study the effect of solute additions on the thermal stability of nanocrystalline alloys. The objective is to determine the effect of Zr addition on the thermal stability of mechanically alloyed nanocrystalline Fe-Cr and Fe-Ni alloys. In the Fe-Cr-Zr alloy system, nanoscale grain size stabilization was maintained up to 900 °C by adding 2 at% Zr. Kinetic pinning by intermetallic particles in the nanoscale range was identified as the primary mechanism of thermal stabilization. In addition to grain size strengthening, intermetallic particles also contribute to strengthening mechanisms. The analysis of microhardness, XRD data, and grain sizes measured from TEM micrographs suggested that both thermodynamic and kinetic mechanisms are possible. It was found that the α → γ phase transformation in the Fe-Cr-Zr system does not influence grain size stabilization. In the Fe-Ni-Zr alloy system, it was shown that grain growth in the Fe-8Ni-1Zr alloy is much less than that of pure Fe and the Fe-8Ni alloy at elevated temperatures. The microstructure of the ternary Fe-8Ni-1Zr alloy remains in the nanoscale range up to 700 °C. Using an in-situ TEM study, it was determined that drastic grain growth occurs when the α → γ phase transformation occurs. 
Accordingly, there can be a synergistic relationship between grain growth and the α → γ phase transformation in Fe-Ni-Zr alloys. In addition to the experimental study of thermal stabilization of nanocrystalline Fe-Cr-Zr and Fe-Ni-Zr alloys, this thesis developed a new predictive model, applicable to strongly segregating solutes, for thermodynamic stabilization of binary alloys. This model can serve as a benchmark for selecting solutes and evaluating their possible contribution to stabilization. Following a regular solution model, the chemical and elastic strain energy contributions are combined to obtain the mixing enthalpy. The total Gibbs free energy of mixing is then minimized with respect to simultaneous variations in the grain boundary volume fraction and the solute concentrations in the grain boundary and the grain interior. The Lagrange multiplier method was used to obtain numerical solutions. Applications are given for the temperature dependence of the grain size and the grain boundary solute excess for selected binary systems where experimental results imply that thermodynamic stabilization could be operative. This thesis also extends the binary model to a new model for thermodynamic stabilization of ternary nanocrystalline alloys. It is applicable to strongly segregating size-misfit solutes and uses input data available in the literature. In the same manner as the binary model, this model is based on a regular solution approach such that the chemical and elastic strain energy contributions are incorporated into the mixing enthalpy ΔHmix, and the mixing entropy ΔSmix is obtained using the ideal solution approximation. The Gibbs mixing free energy ΔGmix is then minimized with respect to simultaneous variations in grain growth and solute segregation parameters. The Lagrange multiplier method is similarly used to obtain numerical solutions for the minimum ΔGmix. 
The temperature dependence of the nanocrystalline grain size and interfacial solute excess can be obtained for selected ternary systems. As an example, model predictions are compared to experimental results for the Fe-Cr-Zr and Fe-Ni-Zr alloy systems. Consistency between the experimental results and the present model predictions provides a more rigorous criterion for investigating thermal stabilization. However, other possible contributions to grain growth stabilization should still be considered.
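
    The regular solution model underlying both the binary and ternary treatments combines an enthalpy term Ωx(1−x) with the ideal configurational mixing entropy. A minimal numeric sketch (with an arbitrary interaction parameter Ω, not values fitted to the Fe-Cr-Zr or Fe-Ni-Zr systems) shows the miscibility gap appearing below the critical temperature Tc = Ω/2R:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def delta_g_mix(x, omega, T):
    """Regular-solution Gibbs free energy of mixing per mole:
    enthalpy omega*x*(1-x) plus the ideal configurational entropy term."""
    return omega * x * (1 - x) + R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))

omega = 20_000.0              # illustrative interaction parameter, J/mol
Tc = omega / (2 * R)          # critical temperature of the miscibility gap
x = np.linspace(1e-4, 1 - 1e-4, 2001)

g_low = delta_g_mix(x, omega, 0.5 * Tc)   # below Tc: double well -> two phases
g_high = delta_g_mix(x, omega, 1.5 * Tc)  # above Tc: single minimum at x = 0.5
```

    Below Tc the curve develops two minima (phase separation); above Tc a single minimum at x = 0.5 remains, since the entropy term dominates the positive mixing enthalpy.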

  15. An improved genetic algorithm for designing optimal temporal patterns of neural stimulation

    NASA Astrophysics Data System (ADS)

    Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.

    2017-12-01

    Objective. Electrical neuromodulation therapies typically apply constant frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended to design optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
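
    A standard (unmodified) GA of the kind the authors start from — tournament selection, one-point crossover, bit-flip mutation — can be sketched on a toy pattern-design problem. The binary pattern encoding and the efficacy score below are hypothetical stand-ins for the biophysically-based models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def score(pattern):
    """Hypothetical efficacy score for a binary pulse pattern: reward pulses,
    penalize adjacent pulses (a stand-in for a biophysical model)."""
    return pattern.sum() - 2 * np.sum(pattern[:-1] * pattern[1:])

def ga(pop_size=60, length=40, gens=80, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, length))
    for _ in range(gens):
        fit = np.array([score(ind) for ind in pop])
        # Tournament selection: keep the fitter of two random individuals.
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[a] >= fit[b], a, b)]
        # One-point crossover on consecutive parent pairs.
        children = parents.copy()
        for i, c in enumerate(rng.integers(1, length, pop_size // 2)):
            children[2 * i, c:] = parents[2 * i + 1, c:]
            children[2 * i + 1, c:] = parents[2 * i, c:]
        # Bit-flip mutation.
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    fit = np.array([score(ind) for ind in pop])
    return pop[np.argmax(fit)], int(fit.max())

best, best_fit = ga()
```

    For this toy score the optimum is the alternating pattern (score 20 at length 40); the paper's modifications target exactly the convergence speed and solution quality of this baseline loop.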

  16. Lq -Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging, applied here to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using a limited-memory BFGS (l-BFGS) quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
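
    The gradient needed for a general Lq-Lp scheme has a simple closed form once the non-smooth cases are slightly smoothed. The sketch below applies it with plain gradient descent to a toy sparse regression problem; the smoothing, step size, and simulated data are assumptions, and the paper's forward model (simplified spherical harmonics) and l-BFGS solver are not reproduced.

```python
import numpy as np

def lq_lp_grad(x, A, b, q, p, lam, eps=1e-6):
    """Gradient of (1/q)*sum|Ax - b|^q + (lam/p)*sum|x|^p, with a small
    smoothing eps so the non-smooth cases (e.g. p = 1) stay differentiable."""
    r = A @ x - b
    dq = r * (r**2 + eps) ** (q / 2.0 - 1.0)   # ~ |r|^(q-1) * sign(r)
    dp = x * (x**2 + eps) ** (p / 2.0 - 1.0)   # ~ |x|^(p-1) * sign(x)
    return A.T @ dq + lam * dp

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 30))
x_true = np.zeros(30)
x_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth
b = A @ x_true + 0.05 * rng.standard_normal(60)

x = np.zeros(30)
for _ in range(4000):                          # plain gradient descent
    x -= 5e-4 * lq_lp_grad(x, A, b, q=1.5, p=1.0, lam=0.5)
```

    Setting q = 1.5 and p = 1 corresponds to the L1.5-L1 scheme the paper found best; other (q, p) pairs are obtained by changing two arguments, which is the point of having one gradient expression for the whole family.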

  17. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
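
    The FFT shortcut for Green's-function convolutions on a regular grid can be demonstrated directly: zero-pad to twice the grid size so that circular convolution reproduces the linear one, then compare against brute-force summation. The grid size, wavenumber, and the crude zeroing of the singular self term are illustrative assumptions, not the paper's quadrature.

```python
import numpy as np

N, h, k = 8, 0.1, 2.0              # grid size, spacing, wavenumber (illustrative)
idx = np.arange(N)

# Helmholtz Green's function sampled on all pairwise offsets in [-(N-1), N-1]^3.
off = np.arange(-(N - 1), N) * h
dx, dy, dz = np.meshgrid(off, off, off, indexing='ij')
r = np.sqrt(dx**2 + dy**2 + dz**2)
with np.errstate(divide='ignore', invalid='ignore'):
    G = np.exp(1j * k * r) / (4 * np.pi * r)
G[N - 1, N - 1, N - 1] = 0.0       # crude self-interaction treatment (assumption)

rng = np.random.default_rng(11)
q = rng.standard_normal((N, N, N)) # source strengths on the regular grid

# FFT path: zero-pad to (2N-1)^3 so circular convolution equals linear.
M = 2 * N - 1
field_fft = np.fft.ifftn(np.fft.fftn(np.fft.ifftshift(G)) *
                         np.fft.fftn(q, s=(M, M, M)))[:N, :N, :N]

# Direct O(N^6) summation for verification.
field_direct = np.zeros((N, N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        for l in range(N):
            sub = G[np.ix_(i - idx + N - 1, j - idx + N - 1, l - idx + N - 1)]
            field_direct[i, j, l] = np.sum(sub * q)
```

    The FFT evaluation costs O(N^3 log N) instead of O(N^6), which is the saving exploited for the finest-level interactions in the composite method.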

  18. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  19. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  20. Dynamics of temporally localized states in passively mode-locked semiconductor lasers

    NASA Astrophysics Data System (ADS)

    Schelte, C.; Javaloyes, J.; Gurevich, S. V.

    2018-05-01

    We study the emergence and the stability of temporally localized structures in the output of a semiconductor laser passively mode locked by a saturable absorber in the long-cavity regime. For large yet realistic values of the linewidth enhancement factor, we disclose the existence of secondary dynamical instabilities where the pulses develop regular and subsequently irregular temporal oscillations. By a detailed bifurcation analysis we show that additional solution branches consisting of multipulse (molecule) solutions exist. We demonstrate that the various solution curves for the single and multipeak pulses can splice and intersect each other via transcritical bifurcations, leading to a complex web of solutions. Our analysis is based on a generic model of mode locking that consists of a time-delayed dynamical system, but also on a much more numerically efficient, yet approximate, partial differential equation. We compare the results of the bifurcation analysis of both models in order to assess up to which point the two approaches are equivalent. We conclude our analysis with a study of the influence of group velocity dispersion, which is only possible in the framework of the partial differential equation model, and we show that it may have a profound impact on the dynamics of the localized states.

  1. Comment on "Construction of regular black holes in general relativity"

    NASA Astrophysics Data System (ADS)

    Bronnikov, Kirill A.

    2017-12-01

    We claim that the paper by Zhong-Ying Fan and Xiaobao Wang on nonlinear electrodynamics coupled to general relativity [Phys. Rev. D 94, 124027 (2016)], although correct in general, in some respects repeats previously obtained results without giving proper references. There is also an important point missing in this paper, which is necessary for understanding the physics of the system: in solutions with an electric charge, a regular center requires a non-Maxwell behavior of the Lagrangian function L(f) (f = FμνFμν) at small f. Therefore, in all electric regular black hole solutions with a Reissner-Nordström asymptotic, the Lagrangian L(f) is different in different parts of space, and the electromagnetic field behaves in a singular way at surfaces where L(f) suffers branching.

  2. Global existence and incompressible limit in critical spaces for compressible flow of liquid crystals

    NASA Astrophysics Data System (ADS)

    Bie, Qunyi; Cui, Haibo; Wang, Qiru; Yao, Zheng-An

    2017-10-01

    The Cauchy problem for the compressible flow of nematic liquid crystals in the framework of critical spaces is considered. We first establish the existence and uniqueness of global solutions provided that the initial data are close to some equilibrium states. This result improves the work by Hu and Wu (SIAM J Math Anal 45(5):2678-2699, 2013) through relaxing the regularity requirement of the initial data in terms of the director field. Based on the global existence, we then consider the incompressible limit problem for ill prepared initial data. We prove that as the Mach number tends to zero, the global solution to the compressible flow of liquid crystals converges to the solution to the corresponding incompressible model in some function spaces. Moreover, the accurate converge rates are obtained.

  3. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods, and we assess the method's statistical efficiency and its robustness to noise and to the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station, showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  4. On a model of electromagnetic field propagation in ferroelectric media

    NASA Astrophysics Data System (ADS)

    Picard, Rainer

    2007-04-01

    The Maxwell system in an anisotropic, inhomogeneous medium with non-linear memory effect produced by a Maxwell-type system for the polarization is investigated under low regularity assumptions on data and domain. The particular form of memory in the system is motivated by a model for electromagnetic wave propagation in ferroelectric materials suggested by Greenberg, MacCamy and Coffman [J.M. Greenberg, R.C. MacCamy, C.V. Coffman, On the long-time behavior of ferroelectric systems, Phys. D 134 (1999) 362-383]. To avoid unnecessary regularity requirements the problem is approached as a space-time operator equation in the framework of extrapolation spaces (Sobolev lattices), a theoretical framework developed in [R. Picard, Evolution equations as space-time operator equations, Math. Anal. Appl. 173 (2) (1993) 436-458; R. Picard, Evolution equations as operator equations in lattices of Hilbert spaces, Glasnik Mat. 35 (2000) 111-136]. A solution theory for a large class of ferroelectric materials confined to an arbitrary open set (with suitably generalized boundary conditions) is obtained.

  5. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation, the BDM1 approach proves superior to the RT0 one, although the advantage is less significant for the accompanying solute transport.

  6. Lax Integrability and the Peakon Problem for the Modified Camassa-Holm Equation

    NASA Astrophysics Data System (ADS)

    Chang, Xiangke; Szmigielski, Jacek

    2018-02-01

    Peakons are special weak solutions of a class of nonlinear partial differential equations modelling non-linear phenomena such as the breakdown of regularity and the onset of shocks. We show that the natural concept of weak solutions in the case of the modified Camassa-Holm equation studied in this paper is dictated by the distributional compatibility of its Lax pair and, as a result, it differs from the one proposed and used in the literature based on the concept of weak solutions used for equations of the Burgers type. Subsequently, we give a complete construction of peakon solutions satisfying the modified Camassa-Holm equation in the sense of distributions; our approach is based on solving a certain inverse boundary value problem, the solution of which hinges on a combination of classical techniques of analysis involving Stieltjes' continued fractions and multi-point Padé approximations. We propose sufficient conditions ensuring the global existence of peakon solutions and analyze the large-time asymptotic behaviour, whose special features include a formation of pairs of peakons that share asymptotic speeds, as well as a Toda-like sorting property.

  7. Phase-field modeling of diffusional phase behaviors of solid surfaces: A case study of phase-separating LixFePO4 electrode particles

    DOE PAGES

    Heo, Tae Wook; Chen, Long-Qing; Wood, Brandon C.

    2015-04-08

    In this paper, we present a comprehensive phase-field model for simulating diffusion-mediated kinetic phase behaviors near the surface of a solid particle. The model incorporates elastic inhomogeneity and anisotropy, diffusion mobility anisotropy, interfacial energy anisotropy, and Cahn–Hilliard diffusion kinetics. The free energy density function is formulated based on the regular solution model taking into account the possible solute-surface interaction near the surface. The coherency strain energy is computed using the Fourier-spectral iterative-perturbation method due to the strong elastic inhomogeneity with a zero surface traction boundary condition. Employing a phase-separating LixFePO4 electrode particle for Li-ion batteries as a model system, we perform parametric three-dimensional computer simulations. The model permits the observation of surface phase behaviors that are different from the bulk counterpart. For instance, it reproduces the theoretically well-established surface modes of spinodal decomposition of an unstable solid solution: the surface mode of coherent spinodal decomposition and the surface-directed spinodal decomposition mode. We systematically investigate the influences of major factors on the kinetic surface phase behaviors during the diffusional process. Finally, our simulation study provides insights for tailoring the internal phase microstructure of a particle by controlling the surface phase morphology.
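    The regular solution free energy at the heart of such phase-field models is compact enough to sketch numerically. The interaction parameter Ω and temperature kT below are illustrative, not the paper's LixFePO4 values; the key qualitative feature is the double well and the spinodal region that appear whenever Ω > 2kT:

```python
import numpy as np

# Regular solution free energy per site (dimensionless sketch):
#   f(x) = kT [x ln x + (1 - x) ln(1 - x)] + Omega x (1 - x)
# The mixture phase-separates (double well, spinodal region) when Omega > 2 kT.

def free_energy(x, omega, kT=1.0):
    return kT * (x * np.log(x) + (1.0 - x) * np.log(1.0 - x)) + omega * x * (1.0 - x)

def spinodal_limits(omega, kT=1.0):
    """Concentrations where f''(x) = kT / (x (1 - x)) - 2 Omega vanishes."""
    disc = 1.0 - 2.0 * kT / omega
    if disc < 0:
        return None          # Omega < 2 kT: solid solution stable at all x
    r = np.sqrt(disc)
    return (0.5 * (1.0 - r), 0.5 * (1.0 + r))

lims = spinodal_limits(omega=3.0)   # ≈ (0.211, 0.789)
```

    Between the spinodal limits a uniform composition is linearly unstable and decomposes spontaneously; the surface terms discussed in the paper modify where and how this happens near the particle boundary.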

  8. Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core

    NASA Astrophysics Data System (ADS)

    Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey

    2017-05-01

    SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation on an unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and a reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using a reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to a 25 % reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of a solution in the region of interest.

  9. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach for registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is coupled with the regularization term (the length functional) via multiplication as well. As a matter of fact, our proposed geometric model is the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions, the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework makes important contributions. First, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e. the local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.

  10. Handwashing with soap or alcoholic solutions? A randomized clinical trial of its effectiveness.

    PubMed

    Zaragoza, M; Sallés, M; Gomez, J; Bayas, J M; Trilla, A

    1999-06-01

    The effectiveness of an alcoholic solution compared with the standard hygienic handwashing procedure during regular work in clinical wards and intensive care units of a large public university hospital in Barcelona was assessed. A prospective, randomized clinical trial with crossover design, paired data, and blind evaluation was done. Eligible health care workers (HCWs) included permanent and temporary HCWs of wards and intensive care units. From each category, a random sample of persons was selected. HCWs were randomly assigned to regular handwashing (liquid soap and water) or handwashing with the alcoholic solution by using a crossover design. The number of colony-forming units on agar plates from hand imprints in 3 different samples was counted. A total of 47 HCWs were included. The average reduction in the number of colony-forming units from samples before handwashing to samples after handwashing was 49.6% for soap and water and 88.2% for the alcoholic solution. When both methods were compared, the average number of colony-forming units recovered after the procedure showed a statistically significant difference in favor of the alcoholic solution (P <.001). The alcoholic solution was well tolerated by HCWs. Overall acceptance rate was classified as "good" by 72% of HCWs after 2 weeks' use. Of all HCWs included, 9.3% stated that the use of the alcoholic solution worsened minor pre-existing skin conditions. Although the regular use of hygienic soap and water handwashing procedures is the gold standard, the use of alcoholic solutions is effective and safe and deserves more attention, especially in situations in which the handwashing compliance rate is hampered by architectural problems (lack of sinks) or nursing work overload.

  11. Regularity estimates up to the boundary for elliptic systems of difference equations

    NASA Technical Reports Server (NTRS)

    Strikwerda, J. C.; Wade, B. A.; Bube, K. P.

    1986-01-01

    Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.

  12. Numerical Differentiation of Noisy, Nonsmooth Data

    DOE PAGES

    Chartrand, Rick

    2011-01-01

    We consider the problem of differentiating a function specified by noisy data. Regularizing the differentiation process avoids the noise amplification of finite-difference methods. We use total-variation regularization, which allows for discontinuous solutions. The resulting simple algorithm accurately differentiates noisy functions, including those which have a discontinuous derivative.
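    The idea can be sketched with a lagged-diffusivity (IRLS) fixed point, one common way to implement total-variation regularization; this is not necessarily the paper's exact algorithm, and α, ε, and the |t| test signal below are illustrative choices:

```python
import numpy as np

# TV-regularized differentiation via a lagged-diffusivity (IRLS) fixed point.
# Minimizes 0.5 * ||A u - g||^2 + alpha * sum(|diff(u)|), where A is the
# cumulative-sum antiderivative operator, so discontinuous derivatives are
# allowed while finite-difference noise amplification is suppressed.

def tv_diff(f, dt, alpha=0.05, eps=1e-6, iters=100):
    n = len(f) - 1
    g = f[1:] - f[0]                                # integrated data
    A = dt * np.tril(np.ones((n, n)))               # antiderivative operator
    D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)      # forward difference
    u = np.zeros(n)
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ u) ** 2 + eps)       # smoothed-TV weights
        H = A.T @ A + alpha * D.T @ (w[:, None] * D)
        u = np.linalg.solve(H, A.T @ g)
    return u

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 201)
f = np.abs(t) + 0.01 * rng.standard_normal(t.size)  # noisy samples of |t|
u = tv_diff(f, dt=t[1] - t[0])                      # ≈ -1 for t < 0, +1 for t > 0
```

    Each iteration solves a linear system whose weights w downweight the penalty where the current derivative estimate already jumps, so the discontinuity at t = 0 survives while the noise is smoothed away.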

  13. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
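    The core Soft-Impute iteration described above is compact enough to sketch with a dense SVD (the paper's scalable variant instead exploits the sparse-plus-low-rank structure of the iterates, which is omitted here; λ and the synthetic rank-3 test matrix are illustrative choices):

```python
import numpy as np

# Soft-Impute sketch: repeatedly fill the missing entries from the
# soft-thresholded SVD of the current completion. Dense SVD for clarity.

def soft_impute(X, mask, lam, iters=200):
    Z = np.where(mask, X, 0.0)                    # initial fill with zeros
    for _ in range(iters):
        Y = np.where(mask, X, Z)                  # keep observed, impute missing
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - lam, 0.0)              # soft-threshold singular values
        Z = (U * s) @ Vt
    return Z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))   # rank-3 truth
mask = rng.random(A.shape) < 0.7                                   # ~70% observed
Ahat = soft_impute(A, mask, lam=0.5)
```

    Running this over a decreasing grid of λ values, warm-starting each solve from the previous Z, yields the regularization path the abstract refers to; a single λ suffices for the sketch.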

  14. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  15. Bayesian Inference for Generalized Linear Models for Spiking Neurons

    PubMed Central

    Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias

    2010-01-01

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627

  16. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably.
The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
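    The convexity-preserving idea from the first part can be illustrated on the scalar denoising problem 0.5 (y - x)^2 + λ φ(x; a), using the minimax-concave (MC) penalty as one standard parameterized non-convex regularizer (chosen here for illustration; the thesis may employ a different family). The scalar cost stays strictly convex whenever aλ < 1, and its minimizer is the firm threshold, which, unlike the soft threshold, leaves large values unbiased:

```python
# Firm thresholding: the exact minimizer of 0.5*(x - y)**2 + lam*phi(x; a)
# for the minimax-concave penalty phi. Illustrative sketch; a and lam are
# free parameters subject to the convexity condition a * lam < 1.

def firm_threshold(y, lam, a):
    assert a * lam < 1.0, "non-convexity too strong: scalar cost not convex"
    ay = abs(y)
    if ay <= lam:
        return 0.0                            # small inputs: set to zero
    if ay >= 1.0 / a:
        return y                              # large inputs: passed through, no bias
    s = 1.0 if y > 0 else -1.0
    return (y - lam * s) / (1.0 - a * lam)    # intermediate: reduced shrinkage

x_big = firm_threshold(5.0, 1.0, 0.25)        # 5.0; soft thresholding would give 4.0
```

    The soft threshold (a → 0 limit) would shrink every large input by λ; the MC penalty removes that bias while the constraint aλ < 1 keeps the overall objective convex, which is exactly the trade-off the first part of the thesis exploits.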

  17. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelops the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
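    The finite-time convergence that distinguishes a terminal attractor from an ordinary one can be checked on the scalar toy dynamics dx/dt = -x^(1/3) (an illustrative example, not the paper's network equations): the Lipschitz condition fails at x = 0, and the trajectory arrives there at the finite time t* = (3/2) x0^(2/3), whereas dx/dt = -x only decays exponentially and never reaches 0 exactly.

```python
# Terminal attractor toy model: dx/dt = -x**(1/3).
# Analytically, x(t)**(2/3) = x0**(2/3) - (2/3) t, so the state reaches the
# attractor at 0 exactly at t* = 1.5 * x0**(2/3) -- a finite settling time.

def settle_time(x0, dt=1e-4):
    """Forward-Euler time for the state to reach the terminal attractor at 0."""
    x, t = float(x0), 0.0
    while x > 0.0:
        x -= dt * x ** (1.0 / 3.0)
        t += dt
    return t

t_star = settle_time(1.0)    # ≈ 1.5, matching the analytic settling time
```

    It is this finite settling time that makes terminal attractors attractive for content-addressable memory: retrieval terminates rather than merely converging asymptotically.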

  18. Electrophysiology of neurones of the inferior mesenteric ganglion of the cat.

    PubMed Central

    Julé, Y; Szurszewski, J H

    1983-01-01

    Intracellular recordings were obtained from cells in vitro in the inferior mesenteric ganglia of the cat. Neurones could be classified into three types: non-spontaneous, irregular discharging and regular discharging neurones. Non-spontaneous neurones had a stable resting membrane potential and responded with action potentials to indirect preganglionic nerve stimulation and to intracellular injection of depolarizing current. Irregular discharging neurones were characterized by a discharge of excitatory post-synaptic potentials (e.p.s.p.s.) which sometimes gave rise to action potentials. This activity was abolished by hexamethonium bromide, chlorisondamine and d-tubocurarine chloride. Tetrodotoxin and a low-Ca2+/high-Mg2+ solution also blocked on-going activity in irregular discharging neurones. Regular discharging neurones were characterized by a rhythmic discharge of action potentials. Each action potential was preceded by a gradual depolarization of the intracellularly recorded membrane potential. Intracellular injection of hyperpolarizing current abolished the regular discharge of action potentials. No synaptic potentials were observed during hyperpolarization of the membrane potential. Nicotinic, muscarinic and adrenergic receptor blocking drugs did not modify the discharge of action potentials in regular discharging neurones. A low-Ca2+/high-Mg2+ solution also had no effect on the regular discharge of action potentials. Interpolation of an action potential between spontaneous action potentials in regular discharging neurones reset the rhythm of discharge. It is suggested that regular discharging neurones were endogenously active and that these neurones provided synaptic input to irregular discharging neurones. PMID:6140310

  19. Electrophysiology of neurones of the inferior mesenteric ganglion of the cat.

    PubMed

    Julé, Y; Szurszewski, J H

    1983-11-01

    Intracellular recordings were obtained from cells in vitro in the inferior mesenteric ganglia of the cat. Neurones could be classified into three types: non-spontaneous, irregular discharging and regular discharging neurones. Non-spontaneous neurones had a stable resting membrane potential and responded with action potentials to indirect preganglionic nerve stimulation and to intracellular injection of depolarizing current. Irregular discharging neurones were characterized by a discharge of excitatory post-synaptic potentials (e.p.s.p.s.) which sometimes gave rise to action potentials. This activity was abolished by hexamethonium bromide, chlorisondamine and d-tubocurarine chloride. Tetrodotoxin and a low-Ca2+/high-Mg2+ solution also blocked on-going activity in irregular discharging neurones. Regular discharging neurones were characterized by a rhythmic discharge of action potentials. Each action potential was preceded by a gradual depolarization of the intracellularly recorded membrane potential. Intracellular injection of hyperpolarizing current abolished the regular discharge of action potentials. No synaptic potentials were observed during hyperpolarization of the membrane potential. Nicotinic, muscarinic and adrenergic receptor blocking drugs did not modify the discharge of action potentials in regular discharging neurones. A low-Ca2+/high-Mg2+ solution also had no effect on the regular discharge of action potentials. Interpolation of an action potential between spontaneous action potentials in regular discharging neurones reset the rhythm of discharge. It is suggested that regular discharging neurones were endogenously active and that these neurones provided synaptic input to irregular discharging neurones.

  20. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment projecting the BP model onto the intersection of the total variation norm and box constraints demonstrates the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.

  1. Solutions of differential equations with regular coefficients by the methods of Richmond and Runge-Kutta

    NASA Technical Reports Server (NTRS)

    Cockrell, C. R.

    1989-01-01

    Numerical solutions of the differential equation which describes the electric field within an inhomogeneous layer of permittivity, upon which a perpendicularly-polarized plane wave is incident, are considered. Richmond's method and the Runge-Kutta method are compared for linear and exponential permittivity profiles. These two approximate solutions are also compared with the exact solutions.
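    The Runge-Kutta side of such a comparison reduces to a standard RK4 sweep across the layer once the field equation is written as a first-order system in (E, E'). The sketch below uses y'' + k^2 y = 0 as a stand-in with a known exact solution; the report's actual permittivity profiles and boundary conditions are not reproduced here.

```python
import numpy as np

# Classical fourth-order Runge-Kutta marched over [x0, x1] in n steps.
# Test problem: y'' + k^2 y = 0, i.e. y = (E, E') with E(0)=1, E'(0)=0,
# whose exact solution is E(x) = cos(k x).

def rk4(f, y0, x0, x1, n):
    h = (x1 - x0) / n
    x, y = x0, np.asarray(y0, dtype=float)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(x + h, y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        x += h
    return y

k = 2.0
rhs = lambda x, y: np.array([y[1], -k * k * y[0]])
y_end = rk4(rhs, [1.0, 0.0], 0.0, 1.0, 200)   # exact: [cos(2), -2 sin(2)]
```

    With 200 steps the fourth-order accuracy leaves an error many orders of magnitude below the field amplitude, which is why RK4 serves as a reliable reference when checking a method like Richmond's against the exact profiles.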

  2. Thick de Sitter brane solutions in higher dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzhunushaliev, Vladimir; Folomeev, Vladimir

    2009-01-15

    We present thick de Sitter brane solutions which are supported by two interacting phantom scalar fields in five-, six-, and seven-dimensional spacetime. It is shown that for all cases regular solutions with anti-de Sitter asymptotic (5D problem) and a flat asymptotic far from the brane (6D and 7D cases) exist. We also discuss the stability of our solutions.

  3. Effects of regular and whitening dentifrices on remineralization of bovine enamel in vitro.

    PubMed

    Kielbassa, Andrej M; Tschoppe, Peter; Hellwig, Elmar; Wrbas, Karl-Thomas

    2009-02-01

    To compare in vitro the remineralizing effects of different regular dentifrices and whitening dentifrices (containing pyrophosphates) on predemineralized enamel. Specimens from 84 bovine incisors were embedded in epoxy resin, partly covered with nail varnish, and demineralized in a lactic acid solution (37 degrees C, pH 5.0, 8 days). Parts of the demineralized areas were covered with nail varnish, and specimens were randomly assigned to 6 groups. Subsequently, specimens were exposed to a remineralizing solution (37 degrees C, pH 7.0, 60 days) and brushed 3 times a day (1:3 slurry with remineralizing solution) with 1 of 3 regular dentifrices designed for anticaries (group 1, amine; group 2, sodium fluoride) or periodontal (group 3, amine/stannous fluoride) purposes or whitening dentifrice containing pyrophosphates (group 4, sodium fluoride). An experimental dentifrice (group 5, without pyrophosphates/fluorides) and a whitening dentifrice (group 6, monofluorophosphate) served as controls. Mineral loss and lesion depths were evaluated from contact microradiographs, and intergroup comparisons were performed using the closed-test procedure (α = .05). Compared to baseline, specimens brushed with the dentifrices containing stannous/amine fluorides revealed significant mineral gains and lesion depth reductions (P < .05). Concerning the reacquired mineral, the whitening dentifrice performed worse than the regular dentifrices (P > .05), while mineral gain, as well as lesion depth reduction, was negligible with the control groups. Dentifrices containing pyrophosphates perform worse than regular dentifrices but do not necessarily affect remineralization. Unless remineralizing efficacy is proven, whitening dentifrices should be recommended only after deliberate consideration in caries-prone patients.

  4. Three-dimensional finite elements for the analysis of soil contamination using a multiple-porosity approach

    NASA Astrophysics Data System (ADS)

    El-Zein, Abbas; Carter, John P.; Airey, David W.

    2006-06-01

    A three-dimensional finite-element model of contaminant migration in fissured clays or contaminated sand which includes multiple sources of non-equilibrium processes is proposed. The conceptual framework can accommodate a regular network of fissures in 1D, 2D or 3D and immobile solutions in the macro-pores of aggregated topsoils, as well as non-equilibrium sorption. A Galerkin weighted-residual statement for the three-dimensional form of the equations in the Laplace domain is formulated. Equations are discretized using linear and quadratic prism elements. The system of algebraic equations is solved in the Laplace domain and solution is inverted to the time domain numerically. The model is validated and its scope is illustrated through the analysis of three problems: a waste repository deeply buried in fissured clay, a storage tank leaking into sand and a sanitary landfill leaching into fissured clay over a sand aquifer.

  5. Flow in curved ducts of varying cross-section

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Patel, V. C.

    1992-07-01

    Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons between the computed solutions and experimental data are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study, a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model-scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.

  6. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, F. D.; Chen, Y.; Singha, K.

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
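The spatial moments referred to above have a direct discrete form. As a rough sketch (hypothetical grid and field names, not the authors' moment-resolution-matrix code), the zeroth, first, and second central moments of a gridded plume can be computed as:

```python
# Sketch: spatial moments of a 2D solute plume on a regular grid.
# Hypothetical illustration of the moment definitions, assuming uniform cell areas.

def spatial_moments(conc, xs, ys):
    """Zeroth moment (total mass), first moments (centroid), and second
    central moments (spread) of a gridded concentration field, where
    conc[i][j] is the value at (xs[i], ys[j])."""
    m0 = 0.0
    mx = my = 0.0
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            c = conc[i][j]
            m0 += c
            mx += c * x
            my += c * y
    cx, cy = mx / m0, my / m0          # centroid = first moments / mass
    sxx = syy = 0.0
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            c = conc[i][j]
            sxx += c * (x - cx) ** 2
            syy += c * (y - cy) ** 2
    return m0, (cx, cy), (sxx / m0, syy / m0)

# Usage: a symmetric synthetic plume centred at (2.0, 3.0)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.0, 3.0, 4.0, 5.0]
conc = [[0, 0, 0, 0, 0],
        [0, 1, 2, 1, 0],
        [0, 2, 4, 2, 0],
        [0, 1, 2, 1, 0],
        [0, 0, 0, 0, 0]]
m0, centroid, spread = spatial_moments(conc, xs, ys)
```

When such moments are taken from a tomogram rather than the true field, the paper's moment resolution matrix quantifies how regularization and survey geometry bias these estimates.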

  8. Investigation of metal ions sorption of brown peat moss powder

    NASA Astrophysics Data System (ADS)

    Kelus, Nadezhda; Blokhina, Elena; Novikov, Dmitry; Novikova, Yaroslavna; Chuchalin, Vladimir

    2017-11-01

    To study the regularities of sorptive extraction of heavy metal ions by cellulose and its derivatives from aqueous electrolyte solutions, it is necessary to identify the possible mechanism of the sorption process and to choose a model describing it. The present article investigates the regularities of sorption of aliovalent metals on brown peat moss powder. The results show that the sorption isotherm of Al3+ ions is described by the Freundlich isotherm, while the sorption isotherms of Na+ and Ni2+ are described by the Langmuir isotherm. To identify the mechanisms of sorption on brown peat moss powder, the IR spectra of the initial brown peat moss powder samples and of samples after Ni(II) sorption were studied. The metal-ion binding mechanism of brown peat moss powder points to ion exchange, physical adsorption, and complex formation with hydroxyl and carboxyl groups.
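The two isotherm models named in the abstract have simple closed forms; a minimal sketch (with illustrative parameter values, not the constants fitted in this study):

```python
# Sketch of the two isotherm models named in the abstract
# (hypothetical parameter values, not the authors' fitted constants).

def langmuir(c, q_max, K):
    """Langmuir isotherm: monolayer sorption on identical sites.
    q = q_max * K * c / (1 + K * c); saturates at q_max."""
    return q_max * K * c / (1.0 + K * c)

def freundlich(c, K_f, n):
    """Freundlich isotherm: empirical power law for heterogeneous
    surfaces. q = K_f * c**(1/n); no saturation plateau."""
    return K_f * c ** (1.0 / n)

# The Langmuir model approaches its plateau q_max at high concentration,
# while the Freundlich model keeps rising -- a quick way to tell which
# shape a measured isotherm follows.
q_l = [langmuir(c, q_max=2.0, K=0.5) for c in (1, 10, 100)]
q_f = [freundlich(c, K_f=0.8, n=2.5) for c in (1, 10, 100)]
```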

  9. Black-hole solutions with scalar hair in Einstein-scalar-Gauss-Bonnet theories

    NASA Astrophysics Data System (ADS)

    Antoniou, G.; Bakopoulos, A.; Kanti, P.

    2018-04-01

    In the context of the Einstein-scalar-Gauss-Bonnet theory, with a general coupling function between the scalar field and the quadratic Gauss-Bonnet term, we investigate the existence of regular black-hole solutions with scalar hair. Based on a previous theoretical analysis, which studied the evasion of the old and novel no-hair theorems, we consider a variety of forms for the coupling function (exponential, even and odd polynomial, inverse polynomial, and logarithmic) that, in conjunction with the profile of the scalar field, satisfy a basic constraint. Our numerical analysis then always leads to families of regular, asymptotically flat black-hole solutions with nontrivial scalar hair. The solution for the scalar field and the profile of the corresponding energy-momentum tensor, depending on the value of the coupling constant, may exhibit a nonmonotonic behavior, an unusual feature that highlights the limitations of the existing no-hair theorems. We also determine and study in detail the scalar charge, horizon area, and entropy of our solutions.

  10. Sinc-interpolants in the energy plane for regular solution, Jost function, and its zeros of quantum scattering

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Asharabi, R. M.

    2018-01-01

    In a remarkable note of Chadan [Il Nuovo Cimento 39, 697-703 (1965)], the author expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which lead to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to compute the regular wave solution as well as the Jost function and its zeros approximately. This work continues and improves the results of Chadan and other related studies remarkably. Several worked examples are given with illustrations and comparisons with existing methods.
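For context, the classical truncated cardinal (sinc) series that such methods accelerate can be sketched as follows; its slow truncation-error decay is exactly the weakness the paper addresses (a generic illustration, not the authors' accelerated interpolants):

```python
# Classical truncated sinc (Whittaker cardinal) series for a band-limited
# function -- a generic sketch of the interpolant being improved upon.
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_interp(f, h, t, N):
    """Approximate a band-limited f at t from samples f(n*h), |n| <= N:
    f(t) ~= sum_n f(n h) sinc((t - n h)/h)."""
    return sum(f(n * h) * sinc((t - n * h) / h)
               for n in range(-N, N + 1))

# Example: f(x) = sinc(x - 0.25) is band-limited to [-pi, pi], so the
# series with h = 1 converges to it as the truncation N grows --
# but only slowly, which motivates the accelerated techniques above.
approx = sinc_interp(lambda x: sinc(x - 0.25), h=1.0, t=0.6, N=500)
exact = sinc(0.6 - 0.25)
```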

  11. Quasi-brittle damage modeling based on incremental energy relaxation combined with a viscous-type regularization

    NASA Astrophysics Data System (ADS)

    Langenfeld, K.; Junker, P.; Mosler, J.

    2018-05-01

    This paper deals with a constitutive model suitable for the analysis of quasi-brittle damage in structures. The model is based on incremental energy relaxation combined with a viscous-type regularization. A similar approach—which also represents the inspiration for the improved model presented in this paper—was recently proposed in Junker et al. (Contin Mech Thermodyn 29(1):291-310, 2017). Within this work, the model introduced in Junker et al. (2017) is critically analyzed first. This analysis leads to an improved model which shows the same features as that in Junker et al. (2017), but which (i) eliminates unnecessary model parameters, (ii) can be better interpreted from a physics point of view, (iii) can capture a fully softened state (zero stresses), and (iv) is characterized by a very simple evolution equation. In contrast to the cited work, this evolution equation is (v) integrated fully implicitly and (vi) the resulting time-discrete evolution equation can be solved analytically providing a numerically efficient closed-form solution. It is shown that the final model is indeed well-posed (i.e., its tangent is positive definite). Explicit conditions guaranteeing this well-posedness are derived. Furthermore, by additively decomposing the stress rate into deformation- and purely time-dependent terms, the functionality of the model is explained. Illustrative numerical examples confirm the theoretical findings.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Judith; Johnson, Timothy C.; Slater, Lee D.

    There is an increasing need to characterize discrete fractures away from boreholes to better define fracture distributions and monitor solute transport. We performed a 3D evaluation of static and time-lapse cross-borehole electrical resistivity tomography (ERT) data sets from a limestone quarry in which flow and transport are controlled by a bedding-plane feature. Ten boreholes were discretized using an unstructured tetrahedral mesh, and 2D panel measurements were inverted for a 3D distribution of conductivity. We evaluated the benefits of 3D versus 2.5D inversion of ERT data in fractured rock while including the use of borehole regularization disconnects (BRDs) and borehole conductivity constraints. High-conductivity halos (inversion artifacts) surrounding boreholes were removed in static images when BRDs and borehole conductivity constraints were implemented. Furthermore, applying these constraints focused transient changes in conductivity resulting from solute transport on the bedding plane, providing a more physically reasonable model for conductivity changes associated with solute transport at this fractured rock site. Assuming bedding-plane continuity between fractures identified in borehole televiewer data, we discretized a planar region between six boreholes and applied a fracture regularization disconnect (FRD). Although the FRD appropriately focused conductivity changes on the bedding plane, the conductivity distribution within the discretized fracture was nonunique and dependent on the starting homogeneous model conductivity. Synthetic studies performed to better explain field observations showed that inaccurate electrode locations in boreholes resulted in low-conductivity halos surrounding borehole locations. These synthetic studies also showed that the recovery of the true conductivity within an FRD depended on the conductivity contrast between the host rock and fractures. Our findings revealed that the potential exists to improve imaging of fractured rock through 3D inversion and accurate modeling of boreholes. However, deregularization of localized features can result in significant electrical conductivity artifacts, especially when representing features with a high degree of spatial uncertainty.

  13. Twisting singular solutions of Betheʼs equations

    NASA Astrophysics Data System (ADS)

    Nepomechie, Rafael I.; Wang, Chunguang

    2014-12-01

    The Bethe equations for the periodic XXX and XXZ spin chains admit singular solutions, for which the corresponding eigenvalues and eigenvectors are ill-defined. We use a twist regularization to derive conditions for such singular solutions to be physical, in which case they correspond to genuine eigenvalues and eigenvectors of the Hamiltonian.

  14. A Note on Weak Solutions of Conservation Laws and Energy/Entropy Conservation

    NASA Astrophysics Data System (ADS)

    Gwiazda, Piotr; Michálek, Martin; Świerczewska-Gwiazda, Agnieszka

    2018-03-01

    A common feature of systems of conservation laws of continuum physics is that they are endowed with natural companion laws which are in such cases most often related to the second law of thermodynamics. This observation easily generalizes to any symmetrizable system of conservation laws; they are endowed with nontrivial companion conservation laws, which are immediately satisfied by classical solutions. Not surprisingly, weak solutions may fail to satisfy companion laws, which are then often relaxed from equality to inequality and take over the role of physical admissibility conditions for weak solutions. We want to answer the question: what is the critical regularity of weak solutions to a general system of conservation laws for them to satisfy an associated companion law as an equality? An archetypal example of such a result was derived for the incompressible Euler system in the context of Onsager's conjecture in the early nineties. This general result can serve as a simple criterion for numerous systems of mathematical physics to prescribe the regularity of solutions needed for an appropriate companion law to be satisfied.

  15. The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chiun-Chang, E-mail: chlee@mail.nhcue.edu.tw

    2014-05-15

    The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we get the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. Then the classical solutions follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that the global non-electroneutrality implies pointwise non-electroneutrality.

  16. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.

  17. Optimal Spatial Design of Capacity and Quantity of Rainwater Catchment Systems for Urban Flood Mitigation

    NASA Astrophysics Data System (ADS)

    Huang, C.; Hsu, N.

    2013-12-01

    This study incorporates Low-Impact Development (LID) rainwater catchment technology into a Storm-Water Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation, and proposes a simulation-optimization model for effectively searching for the optimal design. In the simulation method, we design a series of regular spatial distributions of capacity and quantity of rainwater catchment facilities, so that the flood reduction achieved by a variety of design forms can be simulated by SWMM. We then calculate the net benefit, equal to the decrease in inundation loss minus the facility cost, and the best solution from the simulation method serves as the initial solution for the optimization model. In the optimization method, we first use the simulation results and a Back-Propagation Neural Network (BPNN) to develop a water-level simulation model of the urban drainage system, replacing SWMM, whose graphical-user-interface operation is hard to couple with optimization models and methods. We then embed the BPNN-based simulation model into the optimization model, whose objective function minimizes the negative net benefit. Finally, we establish a tabu-search-based algorithm to optimize the planning solution. The developed method is applied in Zhonghe Dist., Taiwan. Results showed that applying tabu search and the BPNN-based simulation model within the optimization model not only finds solutions 12.75% better than the simulation method alone, but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% in historical flood events.
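A tabu search of the kind used here can be sketched in a few lines. The following toy version optimizes binary install/skip decisions against a stand-in net-benefit function (all names and weights are hypothetical; the study's objective is the BPNN surrogate, not this linear toy):

```python
# Minimal tabu-search sketch over binary "install a barrel here" decisions,
# with a toy net-benefit function standing in for the BPNN surrogate.
import random

def toy_net_benefit(x, value, cost):
    # flood-reduction benefit minus facility cost (toy stand-in)
    return sum(v * xi for v, xi in zip(value, x)) - sum(c * xi for c, xi in zip(cost, x))

def tabu_search(n, objective, iters=200, tenure=5, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best, best_val = list(x), objective(x)
    tabu = {}  # site index -> iteration until which flipping it is forbidden
    for it in range(iters):
        # evaluate all single-site flips that are not tabu,
        # or that beat the best-so-far (aspiration criterion)
        candidates = []
        for i in range(n):
            y = list(x); y[i] = 1 - y[i]
            val = objective(y)
            if tabu.get(i, -1) < it or val > best_val:
                candidates.append((val, i, y))
        val, i, x = max(candidates)
        tabu[i] = it + tenure          # forbid immediately reversing the move
        if val > best_val:
            best, best_val = list(x), val
    return best, best_val

value = [5, 1, 4, 2, 6, 1]
cost = [2, 3, 1, 4, 2, 2]
sol, obj = tabu_search(6, lambda x: toy_net_benefit(x, value, cost))
```

The tabu list forces the search off local optima while the aspiration criterion still admits any move that improves on the best solution found so far.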

  18. Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.

    2013-12-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.

  19. Regular expansion solutions for small Peclet number heat or mass transfer in concentrated two-phase particulate systems

    NASA Technical Reports Server (NTRS)

    Yaron, I.

    1974-01-01

    Steady state heat or mass transfer in concentrated ensembles of drops, bubbles or solid spheres in uniform, slow viscous motion, is investigated. Convective effects at small Peclet numbers are taken into account by expanding the nondimensional temperature or concentration in powers of the Peclet number. Uniformly valid solutions are obtained, which reflect the effects of dispersed phase content and rate of internal circulation within the fluid particles. The dependence of the range of Peclet and Reynolds numbers, for which regular expansions are valid, on particle concentration is discussed.
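Written out, the expansion mentioned above takes the schematic form (a sketch; the symbols are assumed here, not taken from the paper):

```latex
\theta(\mathbf{r};\,\mathrm{Pe}) \;=\; \theta_0(\mathbf{r})
  \;+\; \mathrm{Pe}\,\theta_1(\mathbf{r})
  \;+\; \mathrm{Pe}^2\,\theta_2(\mathbf{r}) \;+\; \cdots,
\qquad \mathrm{Pe} \ll 1,
```

where \theta is the nondimensional temperature or concentration, \theta_0 solves the pure-conduction problem, and each higher-order \theta_k is driven by convection of the order below it, so the corrections remain uniformly valid at small Peclet number.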

  20. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity with the total number of the sought parameters n × 10^3 of the medium. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.

  1. Two-level schemes for the advection equation

    NASA Astrophysics Data System (ADS)

    Vabishchevich, Petr N.

    2018-06-01

    The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to preserve the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of the advection operators in conservative (divergent) and non-conservative (characteristic) forms; this advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is also constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by numerical results for a model two-dimensional problem.
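As a concrete example of a conditionally stable explicit two-level scheme of the kind analyzed here, consider the classical finite-difference Lax-Wendroff step for 1D constant-speed advection on a periodic grid (a generic sketch, not the paper's finite-element construction):

```python
# Explicit Lax-Wendroff step for 1D advection u_t + a u_x = 0 on a
# periodic grid. A generic finite-difference sketch of a two-level scheme.

def lax_wendroff_step(u, c):
    """One time step; c = a*dt/dx is the Courant number (|c| <= 1 for stability)."""
    n = len(u)
    return [u[j]
            - 0.5 * c * (u[(j + 1) % n] - u[j - 1])          # centered advection
            + 0.5 * c * c * (u[(j + 1) % n] - 2.0 * u[j] + u[j - 1])  # stabilizing term
            for j in range(n)]   # u[j - 1] wraps periodically via negative indexing

# Advect a hat profile once around a periodic domain: 100 cells, c = 0.5,
# so 200 steps carry the profile exactly back to its starting position.
u = [1.0 if 40 <= j < 60 else 0.0 for j in range(100)]
mass0 = sum(u)
for _ in range(200):
    u = lax_wendroff_step(u, 0.5)
# The scheme is conservative: total "mass" is preserved up to round-off,
# even though the hat profile acquires dispersive wiggles (non-monotone).
```

The wiggles illustrate why the paper insists on inheriting both conservatism and monotonicity: Lax-Wendroff delivers the first but not the second.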

  2. Heterogeneity of activated carbons in adsorption of aniline from aqueous solutions

    NASA Astrophysics Data System (ADS)

    Podkościelny, P.; László, K.

    2007-08-01

    The heterogeneity of activated carbons (ACs) prepared from different precursors is investigated on the basis of adsorption isotherms of aniline from dilute aqueous solutions at various pH values. The APET carbon prepared from polyethyleneterephthalate (PET), as well as the commercial ACP carbon prepared from peat, were used. In addition, to investigate the influence of carbon surface chemistry, adsorption was studied on modified carbons based on the ACP carbon, whose oxygen surface groups were changed by both nitric acid and thermal treatments. The Dubinin-Astakhov (DA) and Langmuir-Freundlich (LF) equations have been used to model aniline adsorption from aqueous solutions on heterogeneous carbon surfaces. Adsorption-energy distribution (AED) functions have been calculated using an algorithm based on a regularization method. Analysis of these functions for the activated carbons studied provides important comparative information about their surface heterogeneity.
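For reference, the two isotherm equations named above are commonly written as follows (standard textbook forms with assumed notation, not the parameterization used by the authors):

```latex
% Dubinin-Astakhov, with adsorption potential A = RT ln(c_s/c):
\frac{q}{q_0} = \exp\!\left[-\left(\frac{A}{E}\right)^{n}\right],
\qquad A = RT\,\ln\frac{c_s}{c};
% Langmuir-Freundlich (Sips):
\qquad q = q_m\,\frac{(Kc)^{m}}{1+(Kc)^{m}}.
```

Here c_s is the aniline solubility, E a characteristic adsorption energy, and m a heterogeneity exponent; inverting such isotherms for the adsorption-energy distribution is an ill-posed step, which is why the AED calculation requires regularization.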

  3. Peristaltic motion of magnetohydrodynamic viscous fluid in a curved circular tube

    NASA Astrophysics Data System (ADS)

    Yasmeen, Shagufta; Okechi, Nnamdi Fidelis; Anjum, Hafiz Junaid; Asghar, Saleem

    In this paper we investigate the peristaltic flow of a viscous fluid through a three-dimensional curved tube in the presence of an applied magnetic field. We present a mathematical model and an asymptotic solution for the three-dimensional Navier-Stokes equations under the assumptions of small inertial forces and the long-wavelength approximation. The effects of the curvature of the tube are of particular interest. The solution is sought as a regular perturbation expansion in the small curvature parameter. It is noted that the velocity field is more sensitive to the curvature of the tube than the pressure gradient is. It is shown that peristaltic magnetohydrodynamic (MHD) flow in a straight tube is the limiting case of this study.

  4. Nonideal Rayleigh–Taylor mixing

    PubMed Central

    Lim, Hyunkyung; Iwerks, Justin; Glimm, James; Sharp, David H.

    2010-01-01

    Rayleigh–Taylor mixing is a classical hydrodynamic instability that occurs when a light fluid pushes against a heavy fluid. The two main sources of nonideal behavior in Rayleigh–Taylor (RT) mixing are regularizations (physical and numerical), which produce deviations from a pure Euler equation, scale invariant formulation, and nonideal (i.e., experimental) initial conditions. The Kolmogorov theory of turbulence predicts stirring at all length scales for the Euler fluid equations without regularization. We interpret mathematical theories of existence and nonuniqueness in this context, and we provide numerical evidence for dependence of the RT mixing rate on nonideal regularizations; in other words, indeterminacy when modeled by Euler equations. Operationally, indeterminacy shows up as nonunique solutions for RT mixing, parametrized by Schmidt and Prandtl numbers, in the large Reynolds number (Euler equation) limit. Verification and validation evidence is presented for the large eddy simulation algorithm used here. Mesh convergence depends on breaking the nonuniqueness with explicit use of the laminar Schmidt and Prandtl numbers and their turbulent counterparts, defined in terms of subgrid scale models. The dependence of the mixing rate on the Schmidt and Prandtl numbers and other physical parameters will be illustrated. We demonstrate numerically the influence of initial conditions on the mixing rate. Both the dominant short wavelength initial conditions and long wavelength perturbations are observed to play a role. By examination of two classes of experiments, we observe the absence of a single universal explanation, with long and short wavelength initial conditions, and the various physical and numerical regularizations contributing in different proportions in these two different contexts. PMID:20615983

  5. Black holes in vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heisenberg, Lavinia; Kase, Ryotaro; Tsujikawa, Shinji

    We study static and spherically symmetric black hole (BH) solutions in second-order generalized Proca theories with nonminimal vector field derivative couplings to the Ricci scalar, the Einstein tensor, and the double dual Riemann tensor. We find concrete Lagrangians which give rise to exact BH solutions by imposing two conditions of the two identical metric components and the constant norm of the vector field. These exact solutions are described by either Reissner-Nordström (RN), stealth Schwarzschild, or extremal RN solutions with a non-trivial longitudinal mode of the vector field. We then numerically construct BH solutions without imposing these conditions. For cubic and quartic Lagrangians with power-law couplings which encompass vector Galileons as the specific cases, we show the existence of BH solutions with the difference between two non-trivial metric components. The quintic-order power-law couplings do not give rise to non-trivial BH solutions regular throughout the horizon exterior. The sixth-order and intrinsic vector-mode couplings can lead to BH solutions with a secondary hair. For all the solutions, the vector field is regular at least at the future or past horizon. The deviation from General Relativity induced by the Proca hair can be potentially tested by future measurements of gravitational waves in the nonlinear regime of gravity.

  6. Hip-hop solutions of the 2N-body problem

    NASA Astrophysics Data System (ADS)

    Barrabés, Esther; Cors, Josep Maria; Pinyol, Conxita; Soler, Jaume

    2006-05-01

    Hip-hop solutions of the 2N-body problem with equal masses are shown to exist using an analytic continuation argument. These solutions are close to planar regular 2N-gon relative equilibria with small vertical oscillations. For fixed N, an infinity of these solutions are three-dimensional choreographies, with all the bodies moving along the same closed curve in the inertial frame.

  7. Technical report series on global modeling and data assimilation. Volume 2: Direct solution of the implicit formulation of fourth order horizontal diffusion for gridpoint models on the sphere

    NASA Technical Reports Server (NTRS)

    Li, Yong; Moorthi, S.; Bates, J. Ray; Suarez, Max J.

    1994-01-01

    High order horizontal diffusion of the form K Delta(exp 2m) is widely used in spectral models as a means of preventing energy accumulation at the shortest resolved scales. In the spectral context, an implicit formulation of such diffusion is trivial to implement. The present note describes an efficient method of implementing implicit high order diffusion in global finite difference models. The method expresses the high order diffusion equation as a sequence of equations involving Delta(exp 2). The solution is obtained by combining fast Fourier transforms in longitude with a finite difference solver for the second order ordinary differential equation in latitude. The implicit diffusion routine is suitable for use in any finite difference global model that uses a regular latitude/longitude grid. The absence of a restriction on the timestep makes it particularly suitable for use in semi-Lagrangian models. The scale selectivity of the high order diffusion gives it an advantage over the uncentering method that has been used to control computational noise in two-time-level semi-Lagrangian models.
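The scale selectivity of implicit high-order diffusion is easy to see in a 1D periodic analogue, where each Fourier mode is damped by 1/(1 + K k^4 dt). The following is a hypothetical sketch using a naive DFT; the note's actual method combines FFTs in longitude with a finite-difference solve in latitude instead:

```python
# 1D spectral analogue of implicit fourth-order diffusion u_t = -K u_xxxx:
# treated implicitly, each Fourier mode obeys
#     u_hat_new = u_hat / (1 + K * k**4 * dt),
# which is unconditionally stable and strongly scale-selective.
import cmath
import math

def dft(u, sign):
    """Naive O(n^2) discrete Fourier transform (sign=-1 forward, +1 inverse)."""
    n = len(u)
    return [sum(u[j] * cmath.exp(sign * 2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def implicit_hyperdiffusion_step(u, K, dt, L):
    n = len(u)
    u_hat = dft(u, -1)
    for k in range(n):
        # physical wavenumber of mode k on a periodic domain of length L
        kk = (k if k <= n // 2 else k - n) * 2.0 * math.pi / L
        u_hat[k] /= (1.0 + K * kk ** 4 * dt)
    return [(z / n).real for z in dft(u_hat, +1)]

# A high-wavenumber wiggle is damped far harder than a smooth mode --
# the scale selectivity discussed above.
n, L = 32, 2.0 * math.pi
smooth = [math.cos(2 * math.pi * j / n) for j in range(n)]       # mode k = 1
rough = [math.cos(2 * math.pi * 8 * j / n) for j in range(n)]    # mode k = 8
s1 = implicit_hyperdiffusion_step(smooth, K=1e-3, dt=1.0, L=L)
r1 = implicit_hyperdiffusion_step(rough, K=1e-3, dt=1.0, L=L)
```

With K dt = 1e-3, the k = 1 mode keeps about 99.9% of its amplitude while the k = 8 mode is cut to roughly a fifth, with no timestep restriction.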

  8. The Role of Solvent-Solute Interactions on The Behavior of Low Molecular Mass Organo-Gelators

    NASA Astrophysics Data System (ADS)

    Cavicchi, Kevin; Feng, Li

    2012-02-01

    Low molecular mass organo-gelators (LMOGs) are a class of small molecules that can self-assemble in organic solvents to form three-dimensional fibrillar networks. This has a profound effect on the viscoelastic properties of the solution causing physical gelation. These gels have uses in a range of industries including cosmetics, foodstuffs, plastics, petroleum and pharmaceuticals. A fundamental question in this field is: What makes a good LMOG? This talk will discuss the relationships between the viscoelastic properties and thermodynamic phase behavior of LMOG/solvent solutions. The regular solution model was used to fit the liquidus line and sol/gel transition temperature vs. concentration in different solvents to determine LMOG-solvent interaction parameters (χ = A/T). This parameter A was found to scale with the solubility parameter of the solvent, especially for non-polar solvents. This demonstrates that gelation is strongly linked to LMOG solubility and indicates that the bulk thermodynamic parameters of the LMOG (solubility parameter and melting temperature) are useful to predict the solution behavior of LMOGs.
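    The fitting step described above can be sketched numerically. The code below inverts a regular-solution (Schröder-van Laar type) liquidus expression, ln x + χ(1-x)^2 = -(ΔHm/R)(1/T - 1/Tm) with χ = A/T, for the interaction constant A. All numerical values are hypothetical and the thermodynamic form is a textbook simplification, not the exact model used in the talk.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def liquidus_T(x, A, dHm, Tm):
    """Regular-solution liquidus: solve
    ln(x) + (A/T)*(1-x)**2 = -(dHm/R)*(1/T - 1/Tm)  for T (closed form)."""
    return (dHm/R + A*(1 - x)**2) / (dHm/(R*Tm) - np.log(x))

def fit_interaction_A(x, T, dHm, Tm):
    """Invert each liquidus point for A = chi*T and average over the data."""
    A_pts = (T*(dHm/(R*Tm) - np.log(x)) - dHm/R) / (1 - x)**2
    return A_pts.mean()

# synthetic liquidus with hypothetical, purely illustrative parameters
dHm, Tm, A_true = 30000.0, 505.0, 1200.0   # J/mol, K, K
x = np.linspace(0.3, 0.95, 10)             # gelator mole fraction
T = liquidus_T(x, A_true, dHm, Tm)
A_fit = fit_interaction_A(x, T, dHm, Tm)   # recovers A_true exactly (no noise)
```

With real melting-point data the inversion would be a least-squares fit rather than an exact average, but the structure — extract A from the liquidus, then compare A across solvents of different solubility parameter — is the one the abstract describes.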

  9. Simple, explicitly time-dependent, and regular solutions of the linearized vacuum Einstein equations in Bondi-Sachs coordinates

    NASA Astrophysics Data System (ADS)

    Mädler, Thomas

    2013-05-01

    Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.

  10. An approach for the regularization of a power flow solution around the maximum loading point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kataoka, Y.

    1992-08-01

    In the conventional power flow solution, the boundary conditions are directly specified by the active and reactive power at each node, so that the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method for obtaining wide-range regularity by modifying the conventional power flow solution method, thereby eliminating the singular point or shifting it to the region with voltage lower than that of the maximum loading point. The continuous tracing of V-P curves, including the maximum loading point, is thus realized. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.

  11. A regularity condition and temporal asymptotics for chemotaxis-fluid equations

    NASA Astrophysics Data System (ADS)

    Chae, Myeongju; Kang, Kyungkeun; Lee, Jihoon; Lee, Ki-Ahm

    2018-02-01

    We consider two-dimensional chemotaxis equations coupled to the Navier-Stokes equations. We first present a new regularity criterion that is localized in a neighborhood of each point. Secondly, we establish temporal decay of the regular solutions under the assumption that the initial mass of the biological cell density is sufficiently small. Both results improve previously known results given in Chae et al (2013 Discrete Continuous Dyn. Syst. A 33 2271-97) and Chae et al (2014 Commun. PDE 39 1205-35).

  12. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  13. Macroscopic theory of dark sector

    NASA Astrophysics Data System (ADS)

    Meierovich, Boris

    A simple Lagrangian with the squared covariant divergence of a vector field as a kinetic term turns out to be an adequate tool for macroscopic description of the dark sector. The zero-mass field acts as dark energy. Its energy-momentum tensor is a simple additive to the cosmological constant [1]. Space-like and time-like massive vector fields describe two different forms of dark matter. The space-like massive vector field is attractive. It is responsible for the observed plateau in galaxy rotation curves [2]. The time-like massive field displays repulsive elasticity. In balance with dark energy and ordinary matter it provides a four-parameter diversity of regular solutions of the Einstein equations describing different possible cosmological and oscillating non-singular scenarios of evolution of the universe [3]. In particular, the singular big bang turns into a regular inflation-like transition from contraction to expansion with accelerated expansion at late times. The fine-tuned Friedmann-Robertson-Walker singular solution corresponds to the particular limiting case at the boundary of existence of regular oscillating solutions in the absence of vector fields. The simplicity of the general covariant expression for the energy-momentum tensor allows one to analyse the main properties of the dark sector analytically and avoid unnecessary model assumptions. It opens a possibility to trace how the additional attraction of the space-like dark matter, dominating on the galaxy scale, transforms into the elastic repulsion of the time-like dark matter, dominating on the scale of the Universe.
    1. B. E. Meierovich. "Vector fields in multidimensional cosmology". Phys. Rev. D 84, 064037 (2011).
    2. B. E. Meierovich. "Galaxy rotation curves driven by massive vector fields: Key to the theory of the dark sector". Phys. Rev. D 87, 103510 (2013).
    3. B. E. Meierovich. "Towards the theory of the evolution of the Universe". Phys. Rev. D 85, 123544 (2012).

  14. Numerical modeling of magnetic moments for UXO applications

    USGS Publications Warehouse

    Sanchez, V.; Li, Y.; Nabighian, M.; Wright, D.

    2006-01-01

    The surface magnetic anomaly observed in UXO clearance is mainly dipolar and, consequently, the dipole is the only magnetic moment regularly recovered in UXO applications. The dipole moment contains information about intensity of magnetization but lacks information about shape. In contrast, higher-order moments, such as quadrupole and octupole, encode asymmetry properties of the magnetization distribution within the buried targets. In order to improve our understanding of magnetization distribution within UXO and non-UXO objects and its potential utility in UXO clearance, we present a 3D numerical modeling study for highly susceptible metallic objects. The basis for the modeling is the solution of a nonlinear integral equation describing magnetization within isolated objects. A solution for magnetization distribution then allows us to compute magnetic moments of the object, analyze their relationships, and provide a depiction of the surface anomaly produced by different moments within the object. Our modeling results show significant high-order moments for more asymmetric objects situated at depths typical of UXO burial, and suggest that the increased relative contribution to magnetic gradient data from these higher-order moments may provide a practical tool for improved UXO discrimination.

  15. Water movement through plant roots - exact solutions of the water flow equation in roots with linear or exponential piecewise hydraulic properties

    NASA Astrophysics Data System (ADS)

    Meunier, Félicien; Couvreur, Valentin; Draye, Xavier; Zarebanadkouki, Mohsen; Vanderborght, Jan; Javaux, Mathieu

    2017-12-01

    In 1978, Landsberg and Fowkes presented a solution of the water flow equation inside a root with uniform hydraulic properties. These properties are root radial conductivity and axial conductance, which control, respectively, the radial water flow between the root surface and xylem and the axial flow within the xylem. From the solution for the xylem water potential, functions that describe the radial and axial flow along the root axis were derived. These solutions can also be used to derive root macroscopic parameters that are potential input parameters of hydrological and crop models. In this paper, novel analytical solutions of the water flow equation are developed for roots whose hydraulic properties vary along their axis, which is the case for most plants. We derived solutions for single roots with linear or exponential variations of hydraulic properties with distance to the root tip. These solutions were subsequently combined to construct single roots with complex hydraulic property profiles. The analytical solutions allow one to verify numerical solutions and to obtain a generalized picture of the hydraulic behaviour in terms of the main influencing parameters. The resulting flow distributions in heterogeneous roots differed from those in uniform roots, and simulations led to more regular, less abrupt variations of xylem suction or radial flux along root axes. The model could successfully be applied to maize effective root conductance measurements to derive radial and axial hydraulic properties. We also show that very contrasting root water uptake patterns arise when using either uniform or heterogeneous root hydraulic properties in a soil-root model. The optimal root radius that maximizes water uptake under a carbon cost constraint was also studied. The optimal radius was shown to be highly dependent on the root hydraulic properties and close to observed properties in maize roots.
We finally used the obtained functions to evaluate the impact of root maturation versus root growth on water uptake. Very diverse uptake strategies arise from the analysis. These solutions open new avenues for investigating optimal genotype-environment-management interactions through optimization of, for example, plant-scale macroscopic hydraulic parameters used in ecohydrological models.
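    For reference, the uniform-root baseline that the heterogeneous solutions generalize can be written down in a few lines. The sketch below (parameter values and variable names are invented for illustration, not taken from the paper) implements the Landsberg and Fowkes-type xylem potential profile for a root with constant radial conductivity kr and axial conductance Kx, zero axial flux at the tip and a prescribed potential at the collar.

```python
import numpy as np

def xylem_potential(z, L, kr, Kx, H_soil, H_collar):
    """Landsberg & Fowkes-type solution for a uniform root:
    Kx * H''(z) = kr * (H(z) - H_soil),
    with zero axial flux at the tip (z = 0) and a prescribed
    potential H_collar at the base (z = L)."""
    tau = np.sqrt(kr / Kx)   # inverse characteristic length of the profile
    return H_soil + (H_collar - H_soil) * np.cosh(tau*z) / np.cosh(tau*L)

# illustrative numbers only: 0.2 m root, strong transpiration at the collar
z = np.linspace(0.0, 0.2, 201)
H = xylem_potential(z, L=0.2, kr=1e-4, Kx=1e-6, H_soil=-10.0, H_collar=-100.0)
```

The hyperbolic-cosine profile makes the uniform-root behavior explicit: the axial gradient (and hence the radial uptake flux) is concentrated near the collar, which is precisely the abrupt pattern the heterogeneous-property solutions of the paper smooth out.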

  16. Lipschitz regularity results for nonlinear strictly elliptic equations and applications

    NASA Astrophysics Data System (ADS)

    Ley, Olivier; Nguyen, Vinh Duc

    2017-10-01

    Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic, for instance). For equations with superquadratic growth power in the gradient, one usually uses weak Bernstein-type arguments which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth power of the Hamiltonian with respect to the gradient variable, using some ideas coming from the Ishii-Lions method. We use these bounds to solve an ergodic problem and to study the regularity and the large time behavior of the solution of the evolution equation.

  17. Topics in Bethe Ansatz

    NASA Astrophysics Data System (ADS)

    Wang, Chunguang

    Integrable quantum spin chains have close connections to integrable quantum field theories, modern condensed matter physics, string and Yang-Mills theories. Bethe ansatz is one of the most important approaches for solving quantum integrable spin chains. At the heart of the algebraic structure of integrable quantum spin chains is the quantum Yang-Baxter equation and the boundary Yang-Baxter equation. This thesis focuses on four topics in Bethe ansatz. The Bethe equations for the isotropic periodic spin-1/2 Heisenberg chain with N sites have solutions containing ±i/2 that are singular: both the corresponding energy and the algebraic Bethe ansatz vector are divergent. Such solutions must be carefully regularized. We consider a regularization involving a parameter that can be determined using a generalization of the Bethe equations. These generalized Bethe equations provide a practical way of determining which singular solutions correspond to eigenvectors of the model. The Bethe equations for the periodic XXX and XXZ spin chains admit singular solutions, for which the corresponding eigenvalues and eigenvectors are ill-defined. We use a twist regularization to derive conditions for such singular solutions to be physical, in which case they correspond to genuine eigenvalues and eigenvectors of the Hamiltonian. We analyze the ground state of the open spin-1/2 isotropic quantum spin chain with a non-diagonal boundary term using a recently proposed Bethe ansatz solution. As the coefficient of the non-diagonal boundary term tends to zero, the Bethe roots split evenly into two sets: those that remain finite, and those that become infinite. We argue that the former satisfy conventional Bethe equations, while the latter satisfy a generalization of the Richardson-Gaudin equations. We derive an expression for the leading correction to the boundary energy in terms of the boundary parameters.
We argue that the Hamiltonians for A^(2)_2n open quantum spin chains corresponding to two choices of integrable boundary conditions have the symmetries Uq(Bn) and Uq(Cn), respectively. The deformation of Cn is novel, with a nonstandard coproduct. We find a formula for the Dynkin labels of the Bethe states (which determine the degeneracies of the corresponding eigenvalues) in terms of the numbers of Bethe roots of each type. With the help of this formula, we verify numerically (for a generic value of the anisotropy parameter) that the degeneracies and multiplicities of the spectra implied by the quantum group symmetries are completely described by the Bethe ansatz.

  18. The rotation axis for stationary and axisymmetric space-times

    NASA Astrophysics Data System (ADS)

    van den Bergh, N.; Wils, P.

    1985-03-01

    A set of 'extended' regularity conditions is discussed which have to be satisfied on the rotation axis if the latter is assumed to be also an axis of symmetry. For a wide class of energy-momentum tensors these conditions can only hold at the origin of the Weyl canonical coordinate. For static and cylindrically symmetric space-times the conditions can be derived from the regularity of the Riemann tetrad coefficients on the axis. For stationary space-times, however, the extended conditions do not necessarily hold, even when 'elementary flatness' is satisfied and when there are no curvature singularities on the axis. The result by Davies and Caplan (1971) for cylindrically symmetric stationary Einstein-Maxwell fields is generalized by proving that only Minkowski space-time and a particular magnetostatic solution possess a regular axis of rotation. Further, several sets of solutions for neutral and charged, rigidly and differentially rotating dust are discussed.

  19. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
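    The combined effect of an SVD truncation filter and Tikhonov damping can be sketched in a few lines of linear algebra. This is a generic illustration of the filter factors involved, not the I2DUPEN algorithm itself (which uses locally adapted multi-parameter regularization on a Fredholm kernel); the matrix, noise level, and thresholds below are invented.

```python
import numpy as np

def svd_filtered_tikhonov(K, s, lam, rtol=1e-6):
    """Least-squares solution of K x ~ s with two filters: singular values
    below rtol*sigma_max are truncated outright, and the remainder are
    damped by the Tikhonov filter factors sigma/(sigma^2 + lam)."""
    U, sig, Vt = np.linalg.svd(K, full_matrices=False)
    keep = sig > rtol * sig[0]                 # SVD truncation filter
    f = sig[keep] / (sig[keep]**2 + lam)       # Tikhonov filter factors
    return Vt[keep].T @ (f * (U[:, keep].T @ s))

# Tiny ill-posed example: the smallest singular value is numerically zero,
# so an unfiltered solve amplifies measurement noise enormously.
rng = np.random.default_rng(1)
Q1, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
K = Q1 @ np.diag([1.0, 0.5, 1e-9]) @ Q2.T
x_true = np.array([1.0, -2.0, 0.5])
s = K @ x_true + 1e-6 * rng.standard_normal(3)   # noisy "measurements"

x_naive = np.linalg.solve(K, s)          # noise along the tiny singular value
x_reg = svd_filtered_tikhonov(K, s, lam=1e-4)
```

The truncation step removes the component the data cannot constrain at all, while the Tikhonov factors smoothly damp the poorly constrained ones — the same division of labor the abstract describes between the SVD filter and the regularized least squares solve.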

  20. Hairy black holes in scalar extended massive gravity

    NASA Astrophysics Data System (ADS)

    Tolley, Andrew J.; Wu, De-Jun; Zhou, Shuang-Yong

    2015-12-01

    We construct static, spherically symmetric black hole solutions in scalar extended ghost-free massive gravity and show the existence of hairy black holes in this class of extension. While the existence seems to be a generic feature, we focus on the simplest models of this extension and find that asymptotically flat hairy black holes can exist without fine-tuning the theory parameters, unlike the bi-gravity extension, where asymptotic flatness requires fine-tuning in the parameter space. Like the bi-gravity extension, we are unable to obtain asymptotically dS regular black holes in the simplest models considered, but it is possible to obtain asymptotically AdS black holes.

  1. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative to reconstruct the image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least squares method in recovering the model parameters with much fewer data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
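    The flavor of the l1-regularized least squares step can be conveyed with a minimal solver. The sketch below uses iterative soft-thresholding (ISTA) rather than the paper's primal-dual interior point method, and a random Gaussian matrix rather than a resistivity Jacobian with a DCT sparsity basis; it merely illustrates that an l1 penalty recovers a sparse model from far fewer measurements than unknowns.

```python
import numpy as np

def ista(A, b, lam, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    a simple stand-in for an interior point l1 solver."""
    L = np.linalg.norm(A, 2)**2                  # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - b)) / L          # gradient step on the LS term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# sparse recovery from an underdetermined system (m < n), CS-style
rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]           # 3-sparse model vector
b = A @ x_true                                   # 40 "measurements", 100 unknowns
x_hat = ista(A, b, lam=0.01)
```

A smoothness-constrained (l2) solution of the same underdetermined system would smear the three spikes across all 100 unknowns; the l1 penalty instead returns their locations and amplitudes, which is the behavior exploited for sharp resistivity fronts.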

  2. Anisotropic k-essence cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chimento, Luis P.; Forte, Monica

    We investigate a Bianchi type-I cosmology with k-essence and find the set of models which dissipate the initial anisotropy. There are cosmological models with extended tachyon fields and k-essence having a constant barotropic index. We obtain the conditions leading to a regular bounce of the average geometry and the residual anisotropy on the bounce. For constant potential, we develop purely kinetic k-essence models which are dust dominated in their early stages, dissipate the initial anisotropy, and end in a stable de Sitter accelerated expansion scenario. We show that linear k-field and polynomial kinetic function models evolve asymptotically to Friedmann-Robertson-Walker cosmologies. The linear case is compatible with an asymptotic potential interpolating between V_l ∝ φ^(-γ_l) in the shear-dominated regime and V_l ∝ φ^(-2) at late times. In the polynomial case, the general solution contains cosmological models with an oscillatory average geometry. For linear k-essence, we find the general solution in the Bianchi type-I cosmology when the k field is driven by an inverse square potential. This model shares the same geometry as a quintessence field driven by an exponential potential.

  3. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  4. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaption Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
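    A toy version of the ensemble-building step might look as follows: perturb a low-misfit model and keep only perturbations whose misfit stays below a threshold. This is a crude stand-in (the paper locates low-misfit regions with CMAES before sampling; here the starting model is assumed given and the misfit is an invented quadratic with one poorly constrained direction), but it shows why an ED ensemble spreads out along weakly constrained parameters.

```python
import numpy as np

def sample_equivalence_domain(misfit, x_start, threshold, n_samples, scale, rng):
    """Accept random perturbations of a low-misfit model whose misfit stays
    below a threshold; the accepted set approximates the equivalence domain."""
    kept = []
    while len(kept) < n_samples:
        x = x_start + scale * rng.standard_normal(x_start.size)
        if misfit(x) <= threshold:
            kept.append(x)
    return np.array(kept)

# toy quadratic misfit: x[0] is well constrained, x[1] barely constrained
def misfit(x):
    return x[0]**2 + 0.01 * x[1]**2

rng = np.random.default_rng(0)
ens = sample_equivalence_domain(misfit, np.zeros(2), threshold=1.0,
                                n_samples=200, scale=1.0, rng=rng)
```

The ensemble's spread along the second coordinate is much larger than along the first: equivalent models disagree most where the data constrain least, which is exactly the uncertainty information the ED ensemble is built to quantify.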

  5. Effect of concentration and temperature on the rheological behavior of collagen solution.

    PubMed

    Lai, Guoli; Li, Yang; Li, Guoying

    2008-04-01

    Dynamic viscoelastic properties of collagen solutions with concentrations of 0.5-1.5% (w/w) were characterized by oscillatory rheometry at temperatures ranging from 20 to 32.5 degrees C. All collagen solutions showed shear-thinning flow behavior. The complex viscosity increased exponentially and the loss tangent decreased with increasing collagen concentration (C(COL)) when C(COL) >= 0.75%. Both the storage modulus (G') and the loss modulus (G'') increased with increasing frequency and concentration, but decreased with increasing temperature, behaving irregularly at 32.5 degrees C. The relaxation times decreased with increasing temperature for the 1.0% collagen solution. According to a three-zone model, the dynamic moduli of the collagen solutions showed terminal-zone and plateau-zone behavior when C(COL) was no more than 1.25% or the temperature was no more than 30 degrees C. The concentrated solution (1.5%) behaved as being entirely in the plateau zone. Application of time-temperature superposition (TTS) allowed the construction of a master curve, and an Arrhenius-type TTS principle was used to yield an activation energy of 161.4 kJ mol(-1).
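    The Arrhenius-type TTS step reduces to a straight-line fit: ln aT = (Ea/R)(1/T - 1/Tref), so the slope of ln aT against 1/T gives Ea/R. The sketch below round-trips the activation energy quoted above through synthetic shift factors; the temperatures and shift-factor values are invented for illustration.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_Ea(T, aT):
    """Activation energy from time-temperature shift factors:
    ln(aT) = (Ea/R)*(1/T - 1/Tref), so a linear fit of ln(aT)
    against 1/T has slope Ea/R."""
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(aT), 1)
    return slope * R

# synthetic shift factors generated with the Ea value reported above
Ea_true = 161.4e3                        # J/mol
Tref = 298.15                            # K, reference temperature (assumed)
T = np.array([293.15, 298.15, 303.15])   # K, illustrative measurement points
aT = np.exp(Ea_true / R * (1.0 / T - 1.0 / Tref))
Ea_est = arrhenius_Ea(T, aT)             # recovers Ea_true
```

With experimental shift factors the fit would carry scatter, but the slope extraction is the same operation used to obtain the 161.4 kJ/mol figure.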

  6. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  7. Topological regularization and self-duality in four-dimensional anti-de Sitter gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miskovic, Olivera; Olea, Rodrigo; Instituto de Fisica, Pontificia Universidad Catolica de Valparaiso, Casilla 4059, Valparaiso

    2009-06-15

    It is shown that the addition of a topological invariant (Gauss-Bonnet term) to the anti-de Sitter gravity action in four dimensions recovers the standard regularization given by the holographic renormalization procedure. This crucial step makes possible the inclusion of an odd parity invariant (Pontryagin term) whose coupling is fixed by demanding an asymptotic (anti) self-dual condition on the Weyl tensor. This argument allows one to find the dual point of the theory where the holographic stress tensor is related to the boundary Cotton tensor as T_j^i = ±(l^2/8πG) C_j^i, which has been observed in recent literature in solitonic solutions and hydrodynamic models. A general procedure to generate the counterterm series for anti-de Sitter gravity in any even dimension from the corresponding Euler term is also briefly discussed.

  8. Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II

    DTIC Science & Technology

    2016-09-01

    of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for...plays a central role in many applications, including image processing, computer vision and statistics etc. [13, 17, 20, 24]. The EMD is a metric defined

  9. Regularities of the sorption of 1,2,3,4-tetrahydroquinoline derivatives under conditions of reversed phase HPLC

    NASA Astrophysics Data System (ADS)

    Nekrasova, N. A.; Kurbatova, S. V.; Zemtsova, M. N.

    2016-12-01

    Regularities of the sorption of 1,2,3,4-tetrahydroquinoline derivatives on octadecylsilyl silica gel and porous graphitic carbon from aqueous acetonitrile solutions were investigated. The effects that molecular structure and the physicochemical parameters of the sorbates have on their retention characteristics under reversed-phase HPLC conditions are analyzed.

  10. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
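    The "adaptive weights on the penalty" half of the construction can be sketched with a weighted-lasso coordinate descent, where the weights come from a pilot least-squares fit (large pilot coefficients are penalized lightly, near-zero ones heavily). This is only the penalty-side half of the paper's estimator; the robust reweighting of the decision loss is omitted, and all data below are synthetic.

```python
import numpy as np

def adaptive_lasso(X, y, lam, n_iter=200):
    """Coordinate descent for sum_j w_j * |beta_j| penalized least squares
    with adaptive weights w_j = 1/|beta_pilot_j| from an OLS pilot fit.
    A minimal sketch of adaptively weighted penalization only."""
    n, p = X.shape
    beta_pilot, *_ = np.linalg.lstsq(X, y, rcond=None)   # pilot estimate
    w = 1.0 / np.maximum(np.abs(beta_pilot), 1e-8)       # adaptive weights
    beta = np.zeros(p)
    col_sq = (X**2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]         # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam * w[j], 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p = 100, 8
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0, -2.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = adaptive_lasso(X, y, lam=5.0)
```

Because the pilot fit makes the weights on truly zero coefficients very large, those coefficients are thresholded exactly to zero while the large coefficients are barely shrunk — the mechanism behind the oracle property the abstract claims.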

  11. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    PubMed

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
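The report's two-patient-type example can be sketched as a small linear program (all coefficients below are invented for illustration; `scipy.optimize.linprog` is assumed available):

```python
from scipy.optimize import linprog

# Hypothetical planning problem: choose how many "regular" and "severe"
# patients to treat to maximize total health benefit, subject to limited
# clinician time and budget. All numbers are illustrative.
benefit = [2.0, 5.0]            # benefit per treated patient
hours = [1.0, 4.0]              # clinician-hours per patient
cost = [200.0, 300.0]           # cost per patient
total_hours, total_budget = 40.0, 6000.0

# linprog minimizes, so negate the benefit coefficients to maximize.
res = linprog(c=[-b for b in benefit],
              A_ub=[hours, cost],
              b_ub=[total_hours, total_budget],
              bounds=[(0, None), (0, None)])
x_regular, x_severe = res.x
```

With these numbers the optimum treats a mix of both patient types: it sits at the vertex where the time and budget constraints intersect, i.e. both resources are fully used.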

  12. The Mimetic Finite Element Method and the Virtual Element Method for elliptic problems with arbitrary regularity.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco

    2012-07-13

    We develop and analyze a new family of virtual element methods on unstructured polygonal meshes for the diffusion problem in primal form, which use arbitrarily regular discrete spaces V_h ⊂ C^α, α ∈ ℕ. The degrees of freedom are (a) solution and derivative values of various degrees at suitable nodes and (b) solution moments inside polygons. The convergence of the method is proven theoretically and an optimal error estimate is derived. The connection with the Mimetic Finite Difference method is also discussed. Numerical experiments confirm the convergence rate expected from the theory.

  13. The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau Equation II. Contraction Methods

    NASA Astrophysics Data System (ADS)

    Ginibre, J.; Velo, G.

    We continue the study of the initial value problem for the complex Ginzburg-Landau equation (with a > 0, b > 0, g >= 0) initiated in a previous paper [I]. We treat the case where the initial data and the solutions belong to local uniform spaces, more precisely to spaces of functions satisfying local regularity conditions and uniform bounds in local norms, but no decay conditions (or arbitrarily weak decay conditions) at infinity. In [I] we used compactness methods and an extended version of recent local estimates [3] and proved in particular the existence of solutions globally defined in time with local regularity of the initial data corresponding to the spaces Lr for r >= 2 or H1. Here we treat the same problem by contraction methods. This allows us in particular to prove that the solutions obtained in [I] are unique under suitable subcriticality conditions, and to obtain for them additional regularity properties and uniform bounds. The method extends some of those previously applied to the nonlinear heat equation in global spaces to the framework of local uniform spaces.

  14. Surface defects on Gd2Zr2O7 oxide films grown on textured NiW technical substrates by a chemical solution method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Y., E-mail: yuezhao@sjtu.edu.cn

    2017-02-15

    Epitaxial growth of oxide thin films has attracted much interest because of their broad applications in various fields. In this study, we investigated the microstructure of textured Gd2Zr2O7 films grown on (001)〈100〉-oriented NiW alloy substrates by a chemical solution deposition (CSD) method. The effect of precursor-solution aging on defect formation was thoroughly investigated. A slight difference was observed between the as-obtained and aged precursor solutions with respect to the phase purity and global texture of the films prepared using them. However, the surface morphologies differ: some regular-shaped regions (mainly hexagonal or dodecagonal) were observed on the film prepared using the as-obtained precursor, whereas the film prepared using the aged precursor exhibits a homogeneous structure. Electron backscatter diffraction and scanning electron microscopy analyses showed that the Gd2Zr2O7 grains within the regular-shaped regions are polycrystalline, whereas those in the surrounding film are epitaxial. Some polycrystalline regions, ranging from several micrometers to several tens of micrometers, grew across the NiW grain boundaries underneath. To understand this phenomenon, the properties of the precursors and the corresponding xerogel were studied by Fourier transform infrared spectroscopy and coupled thermogravimetry/differential thermal analysis. The results showed that both solutions mainly contain small Gd-Zr-O clusters formed by the reaction of zirconium acetylacetonate with propionic acid during precursor synthesis. The regular-shaped regions were probably formed by large Gd-Zr-O frameworks with a metastable structure in the solution with limited aging time. This study demonstrates the importance of precise control of the chemical reaction path to enhance the stability and homogeneity of the precursors in the CSD route.
    - Highlights: •We investigate the microstructure of Gd2Zr2O7 films grown by a chemical solution route. •The effect of precursor-solution aging on surface-defect formation was thoroughly studied. •Gd-Zr-O clusters are present in the precursor solutions.

  15. LP-stability for the strong solutions of the Navier-Stokes equations in the whole space

    NASA Astrophysics Data System (ADS)

    Beirão da Veiga, H.; Secchi, P.

    1985-10-01

    We consider the motion of a viscous fluid filling the whole space R3, governed by the classical Navier-Stokes equations (1). The existence of global (in time) regular solutions for this system of nonlinear partial differential equations is still an open problem. From both the mathematical and the physical points of view, an interesting property is the stability (or not) of the (eventual) global regular solutions. Here, we assume that v1(t,x) is a solution with initial data a1(x). For small perturbations of a1, we want the solution v1(t,x) to be only slightly perturbed as well. Due to viscosity, it is even expected that the perturbed solution v2(t,x) approaches the unperturbed one as time goes to +infinity. This is precisely the result proved in this paper. To measure the distance between v1(t,x) and v2(t,x) at each time t, suitable norms are introduced (LP norms). For fluids filling a bounded vessel, exponential decay of the above distance is expected. Such a strong result is not reasonable for fluids filling the entire space.

  16. Dynamical black holes in low-energy string theory

    NASA Astrophysics Data System (ADS)

    Aniceto, Pedro; Rocha, Jorge V.

    2017-05-01

    We investigate time-dependent spherically symmetric solutions of the four-dimensional Einstein-Maxwell-axion-dilaton system, with the dilaton coupling that occurs in low-energy effective heterotic string theory. A class of dilaton-electrovacuum radiating solutions with a trivial axion, previously found by Güven and Yörük, is re-derived in a simpler manner and its causal structure is clarified. It is shown that such dynamical spacetimes featuring apparent horizons do not possess a regular light-like past null infinity or future null infinity, depending on whether they are radiating or accreting. These solutions are then extended in two ways. First we consider a Vaidya-like generalisation, which introduces a null dust source. Such spacetimes are used to test the status of cosmic censorship in the context of low-energy string theory. We prove that — within this family of solutions — regular black holes cannot evolve into naked singularities by accreting null dust, unless standard energy conditions are violated. Secondly, we employ S-duality to derive new time-dependent dyon solutions with a nontrivial axion turned on. Although they share the same causal structure as their Einstein-Maxwell-dilaton counterparts, these solutions possess both electric and magnetic charges.

  17. Bayesian inversion of marine CSEM data from the Scarborough gas field using a transdimensional 2-D parametrization

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Key, Kerry; Bodin, Thomas; Myer, David; Constable, Steven

    2014-12-01

    We apply a reversible-jump Markov chain Monte Carlo method to sample the Bayesian posterior model probability density function of 2-D seafloor resistivity as constrained by marine controlled source electromagnetic data. This density function of earth models conveys information on which parts of the model space are illuminated by the data. Whereas conventional gradient-based inversion approaches require subjective regularization choices to stabilize this highly non-linear and non-unique inverse problem and provide only a single solution with no model uncertainty information, the method we use entirely avoids model regularization. The result of our approach is an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. We represent models in 2-D using a Voronoi cell parametrization. To make the 2-D problem practical, we use a source-receiver common midpoint approximation with 1-D forward modelling. Our algorithm is transdimensional and self-parametrizing where the number of resistivity cells within a 2-D depth section is variable, as are their positions and geometries. Two synthetic studies demonstrate the algorithm's use in the appraisal of a thin, segmented, resistive reservoir which makes for a challenging exploration target. As a demonstration example, we apply our method to survey data collected over the Scarborough gas field on the Northwest Australian shelf.

  18. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
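A minimal sketch of the underlying idea, for the standard (regular) case with a linear constraint: project the gradient onto the null space of the constraint Jacobian and integrate the resulting continuous-time dynamics. This uses the classical projector, not the paper's new singularity-free construction:

```python
import numpy as np

# Minimize f(x) = x1^2 + 2*x2^2 subject to x1 + x2 = 1 via the flow
# xdot = -P grad f(x), with P = I - A^T (A A^T)^{-1} A the orthogonal
# projector onto null(A); solutions of the flow stay on the constraint set.
A = np.array([[1.0, 1.0]])
P = np.eye(2) - A.T @ np.linalg.solve(A @ A.T, A)

def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])

x = np.array([0.2, 0.8])    # feasible start: A x = 1
dt = 0.01
for _ in range(5000):       # forward-Euler integration of the flow
    x = x - dt * P @ grad_f(x)
# The flow converges to the constrained minimizer (2/3, 1/3).
```

The projected dynamics decrease f along trajectories while preserving feasibility, which is exactly the role the paper's controller plays; the paper's contribution is a projector that remains well defined when the constraint gradients are linearly dependent.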

  19. Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

    PubMed

    Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping

    2018-05-01

    In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Measuring, Enabling and Comparing Modularity, Regularity and Hierarchy in Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2005-01-01

    For computer-automated design systems to scale to complex designs they must be able to produce designs that exhibit the characteristics of modularity, regularity and hierarchy - characteristics that are found both in man-made and natural designs. Here we claim that these characteristics are enabled by implementing the attributes of combination, control-flow and abstraction in the representation. To support this claim we use an evolutionary algorithm to evolve solutions to different sizes of a table design problem using five different representations, each with different combinations of modularity, regularity and hierarchy enabled, and show that the best performance occurs when all three of these attributes are enabled. We also define metrics for modularity, regularity and hierarchy in design encodings and demonstrate that high fitness values are achieved with high values of modularity, regularity and hierarchy, and that there is a positive correlation between increases in fitness and increases in modularity, regularity and hierarchy.

  1. Joint tumor segmentation and dense deformable registration of brain MR images.

    PubMed

    Parisot, Sarah; Duffau, Hugues; Chemouny, Stéphane; Paragios, Nikos

    2012-01-01

    In this paper we propose a novel graph-based concurrent registration and segmentation framework. Registration is modeled with a pairwise graphical model formulation that is modular with respect to the data and regularization terms. Segmentation is addressed by adopting a similar graphical model, using image-based classification techniques while producing a smooth solution. The two problems are coupled via a relaxation of the registration criterion in the presence of tumors, as well as a segmentation-through-registration term aiming at the separation between healthy and diseased tissues. Efficient linear programming is used to solve both problems simultaneously. State-of-the-art results demonstrate the potential of our method on a large and challenging low-grade glioma data set.

  2. Sandia fracture challenge 2: Sandia California's modeling approach

    DOE PAGES

    Karlson, Kyle N.; James W. Foulk, III; Brown, Arthur A.; ...

    2016-03-09

    The second Sandia Fracture Challenge illustrates that predicting the ductile fracture of Ti-6Al-4V subjected to moderate and elevated rates of loading requires thermomechanical coupling, elasto-thermo-poro-viscoplastic constitutive models with the physics of anisotropy, and regularized numerical methods for crack initiation and propagation. We detail our initial approach with an emphasis on iterative calibration and systematically increasing complexity to accommodate anisotropy in the context of an isotropic material model. Blind predictions illustrate the strengths and weaknesses of our initial approach. We then revisit our findings to illustrate the importance of including anisotropy in the failure process. Furthermore, mesh-independent solutions of continuum damage models having both isotropic and anisotropic yield surfaces are obtained through nonlocality and localization elements.

  3. Small-angle x-ray scattering study of polymer structure: Carbosilane dendrimers in hexane solution

    NASA Astrophysics Data System (ADS)

    Shtykova, E. V.; Feigin, L. A.; Volkov, V. V.; Malakhova, Yu. N.; Streltsov, D. R.; Buzin, A. I.; Chvalun, S. N.; Katarzhanova, E. Yu.; Ignatieva, G. M.; Muzafarov, A. M.

    2016-09-01

    The three-dimensional organization of monodisperse hyper-branched macromolecules of regular structure—carbosilane dendrimers of the zeroth, third, and sixth generations—has been studied by small-angle X-ray scattering (SAXS) in solution. The use of modern methods of SAXS data interpretation, including ab initio modeling, has made it possible to determine the internal architecture of the dendrimers as a function of the generation number and the number of cyclosiloxane end groups (forming the shell of the dendritic macromolecules), and to show that the dendrimers are spherical. The structural results give grounds to consider carbosilane dendrimers promising objects for forming crystals, with subsequent structural analysis determining their structure with high resolution, as well as for designing new materials to be used in various dendrimer-based technological applications.

  4. Space structures insulating material's thermophysical and radiation properties estimation

    NASA Astrophysics Data System (ADS)

    Nenarokomov, A. V.; Alifanov, O. M.; Titov, D. M.

    2007-11-01

    In many practical situations in aerospace technology it is impossible to measure directly such properties of the analyzed materials (for example, composites) as their thermal and radiation characteristics. Often the only way to overcome this difficulty is indirect measurement, which is usually formulated as the solution of an inverse heat transfer problem. Such problems are ill-posed in the mathematical sense, and their main feature shows itself in solution instabilities; special regularizing methods are therefore needed to solve them. Experimental methods for identifying mathematical models of heat transfer by solving inverse problems are among the modern, effective approaches. The objective of this paper is to estimate the thermal and radiation properties of advanced materials using an approach based on inverse methods.

  5. Identification of the population density of a species model with nonlocal diffusion and nonlinear reaction

    NASA Astrophysics Data System (ADS)

    Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel

    2017-05-01

    The identification of the population density of a logistic equation backwards in time, associated with nonlocal diffusion and nonlinear reaction and motivated by the biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density, whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from this method not only depend continuously on the final data, but also converge strongly to the exact solution in the L2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.

  6. Wave drift damping acting on multiple circular cylinders (model tests)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kinoshita, Takeshi; Sunahara, Shunji; Bao, W.

    1995-12-31

    The wave drift damping for the slow drift motion of a four-column platform is experimentally investigated. The estimation of the damping force for the slow drift motion of moored floating structures in ocean waves is one of the most important topics. Bao et al. calculated the interaction of multiple circular cylinders based on potential flow theory and showed that the wave drift damping is significantly influenced by the interaction between cylinders. This calculation method assumes that the slow drift motion can be approximately replaced by a steady current, that is, that structures undergoing slow drift motion are equivalent to ones in both regular waves and a slow current. To validate the semi-analytical solutions of Bao et al., experiments were carried out. First, the added resistance due to waves acting on a structure composed of multiple (four) vertical circular cylinders fixed to a slowly moving carriage was measured in regular waves. Next, the added resistance of the structure moored by linear springs to the slowly moving carriage was measured in regular waves. Furthermore, to validate the assumption that the slow drift motion can be replaced by a steady current, free decay tests in still water and in regular waves were compared with simulations of the slow drift motion using the wave drift damping coefficient obtained from the added resistance tests.

  7. A class of nonideal solutions. 1: Definition and properties

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.

    1983-01-01

    A class of nonideal solutions is defined by constructing a function to represent the composition dependence of thermodynamic properties for members of the class, and some properties of these solutions are studied. The constructed function has several useful features: (1) its parameters occur linearly; (2) it contains a logarithmic singularity in the dilute-solution region and contains ideal solutions and regular solutions as special cases; and (3) it is applicable to N-ary systems and reduces to M-ary systems (M ≤ N) in a form-invariant manner.
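For reference, the regular-solution special case mentioned above has a one-parameter activity-coefficient form, ln γ1 = Ω x2²/(RT); a small sketch with an assumed interchange energy Ω:

```python
import numpy as np

R = 8.314       # gas constant, J/(mol K)

def activity_coefficients(x1, Omega=1500.0, T=300.0):
    """Two-suffix regular-solution model: ln g1 = Omega*x2^2/(R*T),
    ln g2 = Omega*x1^2/(R*T). Omega (J/mol) is an assumed interchange energy."""
    x2 = 1.0 - x1
    g1 = np.exp(Omega * x2**2 / (R * T))
    g2 = np.exp(Omega * x1**2 / (R * T))
    return g1, g2

g1, g2 = activity_coefficients(0.5)   # symmetric mixture: g1 == g2 at x1 = 0.5
```

Each activity coefficient tends to 1 at its own pure-component limit and to exp(Ω/RT) at infinite dilution, the defining behavior of the regular-solution model that the constructed function generalizes.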

  8. Regularity criterion for solutions of the three-dimensional Cahn-Hilliard-Navier-Stokes equations and associated computations.

    PubMed

    Gibbon, John D; Pal, Nairita; Gupta, Anupam; Pandit, Rahul

    2016-12-01

    We consider the three-dimensional (3D) Cahn-Hilliard equations coupled to, and driven by, the forced, incompressible 3D Navier-Stokes equations. The combination, known as the Cahn-Hilliard-Navier-Stokes (CHNS) equations, is used in statistical mechanics to model the motion of a binary fluid. The potential development of singularities (blow-up) in the contours of the order parameter ϕ is an open problem. To address this we have proved a theorem that closely mimics the Beale-Kato-Majda theorem for the 3D incompressible Euler equations [J. T. Beale, T. Kato, and A. J. Majda, Commun. Math. Phys. 94, 61 (1984)]. By taking an L^∞ norm of the energy of the full binary system, designated as E_∞, we have shown that ∫₀ᵗ E_∞(τ) dτ governs the regularity of solutions of the full 3D system. Our direct numerical simulations (DNSs) of the 3D CHNS equations for (a) a gravity-driven Rayleigh-Taylor instability and (b) a constant-energy-injection forcing, with 128³ to 512³ collocation points, confirm over the duration of our DNSs that E_∞ remains bounded as far as our computations allow.

  9. Spectral studies on the interaction of pinacyanol chloride with binary surfactants in aqueous medium.

    PubMed

    Manna, Kausik; Panda, Amiya Kumar

    2009-12-01

    The interaction of pinacyanol chloride (PIN) with pure and binary mixtures of cetyltrimethylammonium bromide (CTAB) and sodium deoxycholate (NaDC) was studied spectroscopically. Interaction of PIN with pure NaDC produced a blue-shifted metachromatic band (at approximately 502 nm), which gradually shifted to longer wavelengths as the concentration of NaDC increased in the pre-micellar stage. For CTAB, only the intensities of both bands increased, without any shift. Mixed surfactant systems behaved differently from the pure components. An increase in the absorbance of the monomeric band of PIN, with a slight red shift, and a simultaneous decrease in the absorbance of its dimeric band were observed for all combinations in the post-micellar region. The PIN-micelle binding constant (K(b)) for pure as well as mixed systems was determined from spectral data using the Benesi-Hildebrand equation. Within the framework of regular solution theory, micellar aggregates were assumed to predominate over other aggregated states, such as vesicles. The aggregation number was determined by the fluorescence quenching method, and spectral analyses were also performed to evaluate CMC values. Rubingh's regular solution theory was employed to evaluate the interaction parameters and micellar composition. Strong synergistic interaction between the oppositely charged surfactants was noted; the bulkier nature of NaDC reduced its incorporation into the mixed micellar system.
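Rubingh's treatment reduces to solving one implicit equation for the micellar mole fraction x1 and then evaluating the interaction parameter β; a sketch with invented CMC values (not the paper's data), assuming `scipy.optimize.brentq` is available:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative inputs: pure-component CMCs c1, c2, and the mixed CMC c12
# measured at bulk mole fraction alpha of surfactant 1 (values assumed, in mM).
c1, c2, c12, alpha = 1.0, 4.0, 0.4, 0.5

def residual(x1):
    """Rubingh's implicit equation for the micellar mole fraction x1:
    x1^2 ln(alpha*c12/(x1*c1)) = (1-x1)^2 ln((1-alpha)*c12/((1-x1)*c2))."""
    return (x1**2 * np.log(alpha * c12 / (x1 * c1))
            - (1 - x1)**2 * np.log((1 - alpha) * c12 / ((1 - x1) * c2)))

x1 = brentq(residual, 1e-6, 1 - 1e-6)                   # micellar composition
beta = np.log(alpha * c12 / (x1 * c1)) / (1 - x1)**2    # interaction parameter
# beta < 0 here, since the mixed CMC lies below both pure CMCs (synergism).
```

A negative β signals synergistic mixing, as reported for the oppositely charged CTAB/NaDC pairs in this study.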

  10. Black hole thermodynamics, conformal couplings, and R 2 terms

    NASA Astrophysics Data System (ADS)

    Chernicoff, Mariano; Galante, Mario; Giribet, Gaston; Goya, Andres; Leoni, Matias; Oliva, Julio; Perez-Nadal, Guillem

    2016-06-01

    Lovelock theory provides a tractable model of higher-curvature gravity in which several questions can be studied analytically. This is the reason why, in recent years, this theory has become the favorite arena for studying the effects of higher-curvature terms in the context of the AdS/CFT correspondence. Lovelock theory also admits extensions that make it possible to accommodate matter coupled to gravity in a non-minimal way. In this setup, problems such as the backreaction of matter on the black hole geometry can also be solved exactly. In this paper, we study the thermodynamics of black holes in theories of gravity of this type, which include higher-curvature terms, U(1) gauge fields, and conformal couplings to matter fields in D dimensions. These charged black hole solutions exhibit a backreacting scalar field configuration that is regular everywhere outside and on the horizon, and they may exist both in asymptotically flat and asymptotically anti-de Sitter (AdS) spaces. We work out explicitly the boundary action for this theory, which renders the variational problem well-posed and suffices to regularize the Euclidean action in AdS. We also discuss several interrelated properties of the theory, such as its duality symmetry under field redefinition and how it acts on black holes and gravitational wave solutions.

  11. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.

  12. Variable Grid Traveltime Tomography for Near-surface Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Cai, A.; Zhang, J.

    2017-12-01

    We present a new algorithm for traveltime tomography that images the subsurface with grids that vary automatically with the geological structure. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach for near-surface imaging. However, regularization on a regular, even grid assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, whereas details along geological boundaries are difficult to resolve. We therefore solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates the grid cells within them for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated with tests on both synthetic and field data. One synthetic model is a buried-basalt model with one horizontal layer. Using variable-grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and the basalt blocks, with far fewer grid cells. The field data were collected in an oil field in China, in an area where the subsurface structures are predominantly layered. The data set includes 476 shots at a 10 m spacing and 1735 receivers at a 10 m spacing. The first-arrival traveltimes of the seismograms were picked for tomography. The reciprocal errors of most shots are between 2 ms and 6 ms. Conventional tomography produces fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a resolved flat layer and fewer artifacts. In addition, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution because there are fewer unknowns and relatively fine grids in small structures. Variable-grid traveltime tomography thus provides an alternative imaging solution for blocky subsurface structures and builds a good starting model for waveform inversion and statics.
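The Tikhonov-regularized update at the core of such tomography can be sketched as a damped least-squares solve with a first-difference smoothing operator; the toy linear system below stands in for the (linearized) traveltime kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_model = 30, 20
G = rng.normal(size=(n_data, n_model))            # stand-in sensitivity kernel
m_true = np.sin(np.linspace(0.0, np.pi, n_model)) # smooth slowness profile
d = G @ m_true + 0.01 * rng.normal(size=n_data)   # traveltime data + noise

L = np.diff(np.eye(n_model), axis=0)              # first-difference operator
lam = 1.0                                         # regularization weight
# m = argmin ||G m - d||^2 + lam * ||L m||^2, via the normal equations:
m_est = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)
```

Aggregating cells inside identified structures, as the paper proposes, shrinks the dimension of `m` before this solve, which is where the reduction from 205,656 to 4,930 unknowns pays off.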

  13. Control of the transition between regular and mach reflection of shock waves

    NASA Astrophysics Data System (ADS)

    Alekseev, A. K.

    2012-06-01

    A control problem was considered that makes it possible to switch the flow between stationary Mach and regular reflection of shock waves within the dual-solution domain. The sensitivity of the flow was computed by solving adjoint equations, and a control disturbance was sought by applying gradient optimization methods. According to the computational results, the transition from regular to Mach reflection can be executed by raising the temperature. The transition from Mach to regular reflection can be achieved by lowering the temperature at moderate Mach numbers and is impossible at large Mach numbers. The reliability of the numerical results was confirmed by verifying them with the help of a posteriori analysis.

  14. Stability Properties of the Regular Set for the Navier-Stokes Equation

    NASA Astrophysics Data System (ADS)

    D'Ancona, Piero; Lucà, Renato

    2018-06-01

    We investigate the size of the regular set for small perturbations of some classes of strong large solutions to the Navier-Stokes equation. We consider perturbations of the data that are small in suitable weighted L2 spaces but can be arbitrarily large in any translation invariant Banach space. We give similar results in the small data setting.

  15. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Image reconstruction is generally formulated as a regularized scheme in which l2 norms are preferred for both the data-misfit and image-prior terms for computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in minimizing l1 norms on the data and regularization terms, which addresses both reconstruction of sharp edges and robustness to measurement errors. We aim at a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause reconstructions with the l2 norm to fail. Results demonstrate the applicability of PDIPM algorithms, especially with l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.
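    The robustness claim can be illustrated without the full PDIPM machinery. The sketch below uses iteratively reweighted least squares (IRLS), a different algorithm from the paper's, to approximate an l1 data fit on a toy linear problem with one corrupted measurement; the sizes and values are invented:

```python
import numpy as np

# One gross outlier (a failed "electrode") pulls the l2 fit away from the
# truth; an l1 data fit, here approximated by IRLS, largely ignores it.
rng = np.random.default_rng(1)
n, p = 60, 3
A = rng.standard_normal((n, p))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[0] += 50.0                               # corrupted measurement

x_l2, *_ = np.linalg.lstsq(A, b, rcond=None)   # l2 solution, biased by outlier

x = x_l2.copy()
for _ in range(50):
    r = A @ x - b
    w = 1.0 / np.maximum(np.abs(r), 1e-8)  # l1 weights downweight big residuals
    Aw = A * w[:, None]
    x = np.linalg.solve(A.T @ Aw, Aw.T @ b)    # weighted normal equations

print(np.abs(x_l2 - x_true).max(), np.abs(x - x_true).max())
```

    The l2 solution carries a visible bias from the single bad row, while the IRLS/l1 solution recovers the true coefficients almost exactly.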

  16. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
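    A reduced version of this inverse problem can be sketched with SciPy's non-negative least squares in place of PDCO, so only the non-negativity constraint (not the L1 term) is active; the kernel, relaxation grid, and noise level are illustrative choices, not the article's:

```python
import numpy as np
from scipy.optimize import nnls

# Discretized inverse Laplace transform of a synthetic two-component decay:
# y(t) = sum_j f_j * exp(-t / T2_j); recover f >= 0 from noisy y.
t = np.linspace(0.01, 3.0, 200)          # acquisition times
T2 = np.logspace(-2, 1, 60)              # candidate relaxation times
K = np.exp(-t[:, None] / T2[None, :])    # Laplace kernel matrix

f_true = np.zeros(60)
f_true[20], f_true[45] = 1.0, 0.5        # two discrete relaxation components
rng = np.random.default_rng(2)
y = K @ f_true + 1e-3 * rng.standard_normal(t.size)

f_est, rnorm = nnls(K, y)                # non-negative least-squares inversion
print(rnorm)                             # residual norm of the fit
```

    Non-negativity alone already acts as a strong regularizer here; the article's PDCO formulation adds an explicit L1 penalty on top for sparser, better-resolved distributions.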

  17. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.

  18. Thermodynamic properties of model CdTe/CdSe mixtures

    DOE PAGES

    van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...

    2015-02-20

    We report on the thermodynamic properties of binary compound mixtures of model group II–VI semiconductors. We use the recently introduced Stillinger–Weber Hamiltonian to model binary mixtures of CdTe and CdSe, and molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law (a linear relation), which implies that the excess volume is a cubic function of mole fraction. A connection is made with hard-sphere models of mixed fcc and zincblende structures. We find that the potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of the temperatures studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that CdTe and CdSe mixtures exhibit phase separation, with an upper consolute temperature of 335 K. Finally, we provide the surface energy as a function of composition. It roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy), indicating that alloying increases stability, even for nanoparticles.
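    The regular-solution construction described above can be reproduced in a few lines. The sketch takes the abstract's upper consolute temperature of 335 K and infers the symmetric interaction parameter W = 2·R·Tc that it implies; the paper's fitted excess enthalpy is cubic rather than symmetric, so this is only the textbook special case:

```python
import numpy as np

R = 8.314                  # gas constant, J/(mol K)
Tc = 335.0                 # upper consolute temperature from the abstract, K
W = 2.0 * R * Tc           # symmetric regular-solution interaction parameter

def g_mix(x, T):
    """Gibbs energy of mixing: regular-solution enthalpy + ideal entropy."""
    return W * x * (1 - x) + R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))

x = np.linspace(1e-6, 1 - 1e-6, 2001)

def n_minima(T):
    """Count strict interior minima of g_mix on the composition grid."""
    g = g_mix(x, T)
    return int(np.sum((g[1:-1] < g[:-2]) & (g[1:-1] < g[2:])))

# Below Tc the double-well shape signals a miscibility gap; above Tc it is gone.
print(n_minima(300.0), n_minima(400.0))
```

    The two minima below Tc mark the coexisting compositions of the phase-separated mixture; above Tc the single minimum at x = 0.5 means full miscibility.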

  19. Accurate orbit propagation in the presence of planetary close encounters

    NASA Astrophysics Data System (ADS)

    Amato, Davide; Baù, Giulio; Bombardelli, Claudio

    2017-09-01

    We present an efficient strategy for the numerical propagation of small Solar system objects undergoing close encounters with massive bodies. The trajectory is split into several phases, each of them being the solution of a perturbed two-body problem. Formulations regularized with respect to different primaries are employed in two subsequent phases. In particular, we consider the Kustaanheimo-Stiefel regularization and a novel set of non-singular orbital elements pertaining to the Dromo family. In order to test the proposed strategy, we perform ensemble propagations in the Earth-Sun Circular Restricted 3-Body Problem (CR3BP) using a variable step size and order multistep integrator and an improved version of Everhart's radau solver of 15th order. By combining the trajectory splitting with regularized equations of motion in short-term propagations (1 year), we gain up to six orders of magnitude in accuracy with respect to the classical Cowell's method for the same computational cost. Moreover, in the propagation of asteroid (99942) Apophis through its 2029 Earth encounter, the position error stays within 100 metres after 100 years. In general, to improve the performance of regularized formulations, the trajectory must be split between 1.2 and 3 Hill radii from the Earth. We also devise a robust iterative algorithm to stop the integration of regularized equations of motion at a prescribed physical time. The results rigorously hold in the CR3BP, and similar considerations may apply when considering more complex models. The methods and algorithms are implemented in the naples fortran 2003 code, which is available online as a GitHub repository.

  20. A reaction-diffusion model of the Darien Gap Sterile Insect Release Method

    NASA Astrophysics Data System (ADS)

    Alford, John G.

    2015-05-01

    The Sterile Insect Release Method (SIRM) is used as a biological control for invasive insect species. SIRM involves introducing large quantities of sterilized male insects into a wild population of invading insects. A fertile/sterile mating produces offspring that are not viable, so the wild insect population will eventually be eradicated. A U.S. government program maintains a permanent sterile fly barrier zone in the Darien Gap between Panama and Colombia to control the screwworm fly (Cochliomyia hominivorax), an insect that feeds off of living tissue in mammals and has devastating effects on livestock. This barrier zone is maintained by regular releases of massive quantities of sterilized male screwworm flies from aircraft. We analyze a reaction-diffusion model of the Darien Gap barrier zone. Simulations of the model equations yield two types of spatially inhomogeneous steady-state solutions, representing a sterile fly barrier that does not prevent invasion and a barrier that does prevent invasion. We investigate steady-state solutions using both phase-plane methods and monotone iteration methods and describe how barrier width and the sterile fly release rate affect steady-state behavior.
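    The threshold behavior that underlies the two steady-state types can be seen already in the spatially homogeneous kinetics, i.e., the reaction term without diffusion. The sketch below uses a generic SIRM growth law in which wild insects reproduce only through fertile matings, a fraction w/(w+S) when sterile males of density S are present; all parameter values are illustrative, not the paper's:

```python
# Spatially homogeneous SIRM kinetics (reaction term only; the paper's model
# adds diffusion). Forward-Euler integration of
#   dw/dt = b * w * (w / (w + S)) * (1 - w) - d * w
def simulate(S, w0=0.5, b=3.0, d=1.0, dt=0.01, steps=20000):
    w = w0
    for _ in range(steps):
        births = b * w * (w / (w + S)) * (1.0 - w)   # logistic fertile births
        w = max(w + dt * (births - d * w), 0.0)      # death term, clamp at 0
    return w

# A large enough sterile release eradicates the population; a small one fails.
print(simulate(S=2.0), simulate(S=0.1))
```

    With diffusion added, the same bistability produces the two spatially inhomogeneous barrier profiles described in the abstract.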

  1. On the mechanical theory for biological pattern formation

    NASA Astrophysics Data System (ADS)

    Bentil, D. E.; Murray, J. D.

    1993-02-01

    We investigate the pattern-forming potential of the mechanical models in embryology proposed by Oster, Murray and their coworkers. We show that the presence of source terms in the tissue extracellular matrix and cell density equations gives rise to spatio-temporal oscillations. An extension of one such model to include biologically realistic long-range effects induces the formation of stationary spatial patterns. Previous attempts to solve the full system were in one dimension only. We obtain solutions in one dimension and extend our simulations to two dimensions. We show that a single mechanical model alone is capable of generating complex but regular spatial patterns, without the model interaction suggested by Nagorcka et al. and by Shaw and Murray. We discuss some biological applications of the models, among which are wound healing and the formation of dermatoglyphic (fingerprint) patterns.

  2. Noise effects in nonlinear biochemical signaling

    NASA Astrophysics Data System (ADS)

    Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert

    2012-01-01

    It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show that even when such nonphysical fluctuations are rare, an exact solution of the Gaussian model shows that the model can yield unphysical results. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to yield an approximate model, that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.

  3. Inside black holes with synchronized hair

    NASA Astrophysics Data System (ADS)

    Brihaye, Yves; Herdeiro, Carlos; Radu, Eugen

    2016-09-01

    Recently, various examples of asymptotically flat, rotating black holes (BHs) with synchronized hair have been explicitly constructed, including Kerr BHs with scalar or Proca hair, and Myers-Perry BHs with scalar hair and a mass gap, showing there is a general mechanism at work. All these solutions have been found numerically, integrating the fully non-linear field equations of motion from the event horizon outwards. Here, we address the spacetime geometry of these solutions inside the event horizon. Firstly, we provide arguments, within linear theory, that there is no regular inner horizon for these solutions. Then, we address this question fully non-linearly, using as a tractable model five dimensional, equal spinning, Myers-Perry hairy BHs. We find that, for non-extremal solutions: (1) the inside spacetime geometry in the vicinity of the event horizon is smooth and the equations of motion can be integrated inwards; (2) before an inner horizon is reached, the spacetime curvature grows (apparently) without bound. In all cases, our results suggest the absence of a smooth Cauchy horizon, beyond which the metric can be extended, for hairy BHs with synchronized hair.

  4. Augmenting Space Technology Program Management with Secure Cloud & Mobile Services

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Munk, Christopher; Helble, Adelle; Press, Martin T.; George, Cory; Johnson, David

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Game Changing Development (GCD) program manages technology projects across all NASA centers and reports to NASA headquarters regularly on progress. Program stakeholders expect an up-to-date, accurate status and often have questions about the program's portfolio that requires a timely response. Historically, reporting, data collection, and analysis were done with manual processes that were inefficient and prone to error. To address these issues, GCD set out to develop a new business automation solution. In doing this, the program wanted to leverage the latest information technology platforms and decided to utilize traditional systems along with new cloud-based web services and gaming technology for a novel and interactive user environment. The team also set out to develop a mobile solution for anytime information access. This paper discusses a solution to these challenging goals and how the GCD team succeeded in developing and deploying such a system. The architecture and approach taken has proven to be effective and robust and can serve as a model for others looking to develop secure interactive mobile business solutions for government or enterprise business automation.

  5. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of a numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruct the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusion drawn from synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
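    The core of the r-solution idea, inverting through only the r largest singular values, can be sketched generically. The operator below is a random ill-conditioned stand-in, not the shallow-water propagation matrix:

```python
import numpy as np

# Truncated-SVD inversion: small singular values amplify data noise, so the
# r-solution keeps only the r largest ones.
rng = np.random.default_rng(3)
m, n = 50, 30
A = (rng.standard_normal((m, n))
     @ np.diag(np.logspace(0, -8, n))     # singular values spanning ~8 decades
     @ rng.standard_normal((n, n)))
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(m)   # noisy "water-level" data

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def r_solution(r):
    """Invert through the r largest singular values only."""
    return Vt[:r].T @ (U[:, :r].T @ b / s[:r])

# The full-rank inverse is destroyed by noise; a truncated one stays usable.
print(np.linalg.norm(r_solution(30) - x_true),
      np.linalg.norm(r_solution(10) - x_true))
```

    Choosing r is the usual bias-variance trade-off: too small loses resolvable source detail, too large lets noise through the small singular values.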

  6. Robust Joint Graph Sparse Coding for Unsupervised Spectral Feature Selection.

    PubMed

    Zhu, Xiaofeng; Li, Xuelong; Zhang, Shichao; Ju, Chunhua; Wu, Xindong

    2017-06-01

    In this paper, we propose a new unsupervised spectral feature selection model by embedding a graph regularizer into the framework of joint sparse regression for preserving the local structures of data. To do this, we first extract the bases of training data by previous dictionary learning methods and, then, map original data into the basis space to generate their new representations, by proposing a novel joint graph sparse coding (JGSC) model. In JGSC, we first formulate its objective function by simultaneously taking subspace learning and joint sparse regression into account, then, design a new optimization solution to solve the resulting objective function, and further prove the convergence of the proposed solution. Furthermore, we extend JGSC to a robust JGSC (RJGSC) via replacing the least square loss function with a robust loss function, for achieving the same goals and also avoiding the impact of outliers. Finally, experimental results on real data sets showed that both JGSC and RJGSC outperformed the state-of-the-art algorithms in terms of k-nearest neighbor classification performance.

  7. Self-assembly of (perfluoroalkyl)alkanes on a substrate surface from solutions in supercritical carbon dioxide.

    PubMed

    Gallyamov, Marat O; Mourran, Ahmed; Tartsch, Bernd; Vinokur, Rostislav A; Nikitin, Lev N; Khokhlov, Alexei R; Schaumburg, Kjeld; Möller, Martin

    2006-06-14

    Toroidal self-assembled structures of perfluorododecylnonadecane and perfluorotetradecyloctadecane have been deposited on mica and highly oriented pyrolytic graphite surfaces by exposure of the substrates to solutions of the (perfluoroalkyl)alkanes in supercritical carbon dioxide. Scanning force microscopy (SFM) images have displayed a high degree of regularity of these self-assembled nanoobjects regarding size, shape, and packing in a monolayer. Analysis of SFM images allowed us to estimate that each toroidal domain has an outer diameter of about 50 nm and consists of several thousands of molecules. We propose a simple model explaining the clustering of the molecules into objects with a finite size. The model, based on close-packing principles, predicts formation of toroids whose size is determined by the molecular geometry. Here, we consider the amphiphilic nature of the (perfluoroalkyl)alkane molecules, in combination with the incommensurable packing parameters of the alkyl and perfluoroalkyl segments, to be a key factor for such a self-assembly.

  8. Initial-boundary value problem to 2D Boussinesq equations for MHD convection with stratification effects

    NASA Astrophysics Data System (ADS)

    Bian, Dongfen; Liu, Jitao

    2017-12-01

    This paper is concerned with the initial-boundary value problem to 2D magnetohydrodynamics-Boussinesq system with the temperature-dependent viscosity, thermal diffusivity and electrical conductivity. First, we establish the global weak solutions under the minimal initial assumption. Then by imposing higher regularity assumption on the initial data, we obtain the global strong solution with uniqueness. Moreover, the exponential decay rates of weak solutions and strong solution are obtained respectively.

  9. A 25% tannic acid solution as a root canal irrigant cleanser: a scanning electron microscope study.

    PubMed

    Bitter, N C

    1989-03-01

    A scanning electron microscope was used to evaluate the cleansing properties of a 25% tannic acid solution on the dentinal surface in the pulp chamber of endodontically prepared teeth. This was compared with the amorphous smear layer of the canal with the use of hydrogen peroxide and sodium hypochlorite solution as an irrigant. The tannic acid solution removed the smear layer more effectively than the regular cleansing agent.

  10. Holographic self-tuning of the cosmological constant

    NASA Astrophysics Data System (ADS)

    Charmousis, Christos; Kiritsis, Elias; Nitti, Francesco

    2017-09-01

    We propose a brane-world setup based on gauge/gravity duality in which the four-dimensional cosmological constant is set to zero by a dynamical self-adjustment mechanism. The bulk contains Einstein gravity and a scalar field. We study holographic RG flow solutions, with the standard model brane separating an infinite volume UV region and an IR region of finite volume. For generic values of the brane vacuum energy, regular solutions exist such that the four-dimensional brane is flat. Its position in the bulk is determined dynamically by the junction conditions. Analysis of linear fluctuations shows that a regime of 4-dimensional gravity is possible at large distances, due to the presence of an induced gravity term. The graviton acquires an effective mass, and a five-dimensional regime may exist at large and/or small scales. We show that, for a broad choice of potentials, flat-brane solutions are manifestly stable and free of ghosts. We compute the scalar contribution to the force between brane-localized sources and show that, in certain models, the vDVZ discontinuity is absent and the effective interaction at short distances is mediated by two transverse graviton helicities.

  11. On dynamical systems approaches and methods in f ( R ) cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alho, Artur; Carloni, Sante; Uggla, Claes, E-mail: aalho@math.ist.utl.pt, E-mail: sante.carloni@tecnico.ulisboa.pt, E-mail: claes.uggla@kau.se

    We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f(R)-gravity. We argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f(R) = R + αR², α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f(R) cosmology and discuss their advantages and disadvantages.

  12. Interactive Inverse Groundwater Modeling - Addressing User Fatigue

    NASA Astrophysics Data System (ADS)

    Singh, A.; Minsker, B. S.

    2006-12-01

    This paper builds on ongoing research on developing an interactive and multi-objective framework to solve the groundwater inverse problem. In this work we solve the classic groundwater inverse problem of estimating a spatially continuous conductivity field, given field measurements of hydraulic heads. The proposed framework is based on an interactive multi-objective genetic algorithm (IMOGA) that not only considers quantitative measures such as calibration error and degree of regularization, but also takes into account expert knowledge about the structure of the underlying conductivity field, expressed as subjective rankings of potential conductivity fields by the expert. The IMOGA converges to the optimal Pareto front representing the best trade-off among the qualitative as well as quantitative objectives. However, since the IMOGA is a population-based iterative search, it requires the user to evaluate hundreds of solutions, which leads to the problem of 'user fatigue'. We propose a two-step methodology to combat user fatigue in such interactive systems. The first step is choosing only a few highly representative solutions to be shown to the expert for ranking. Spatial clustering is used to group the search space based on the similarity of the conductivity fields, and sampling is then carried out from different clusters to improve the diversity of solutions shown to the user. Once the expert has ranked representative solutions from each cluster, a machine learning model is used to learn user preference and extrapolate it to the solutions not ranked by the expert. We investigate different machine learning models, such as decision trees, Bayesian learning models, and instance-based weighting, to model user preference. In addition, we investigate ways to improve the performance of these models by providing information about the spatial structure of the conductivity fields (which is what the expert bases his or her ranking on).
Results are shown for each of these machine learning models, and the advantages and disadvantages of each approach are discussed. These results indicate that the proposed two-step methodology leads to a significant reduction in user fatigue without deteriorating the solution quality of the IMOGA.

  13. The wet solidus of silica: predictions from the scaled particle theory and polarized continuum model.

    PubMed

    Ottonello, G; Richet, P; Vetuschi Zuccolini, M

    2015-02-07

    We present an application of the Scaled Particle Theory (SPT) coupled with an ab initio assessment of the electronic, dispersive, and repulsive energy terms based on the Polarized Continuum Model (PCM), aimed at reproducing the observed solubility behavior of OH2 over the entire compositional range from pure molten silica to pure water and over wide pressure and temperature regimes. It is shown that the solution energy is dominated by cavitation terms, mainly entropic in nature, which cause a large negative solution entropy and a consequent marked increase of gas-phase fugacity with increasing temperature. The solution enthalpy is negative and dominated by electrostatic terms, which depict a pseudopotential well whose minimum occurs at a low water fraction (XH2O) of about 6 mol%. The fine tuning of the solute-solvent interaction is achieved through very limited adjustments of the electrostatic scaling factor γel, which in pure water is slightly higher than the nominal value (γel = 1.224 against 1.2), attains its minimum at low H2O content (γel = 0.9958), and rises again at infinite dilution (γel = 1.0945). The complex solution behavior is interpreted as due to the formation of energetically efficient hydrogen bonding when OH functionals are present in appropriate amounts and relative positioning with respect to the discrete OH2 molecules, reinforcing in this way the nominal solute-solvent inductive interaction. The interaction energy derived from the SPT-PCM calculations is then recast in terms of a sub-regular Redlich-Kister expansion of appropriate order, whereas the thermodynamic properties of the H2O component at its standard state (1-molal solution referred to infinite dilution) are calculated from partial differentiation of the solution energy over the intensive variables.

  14. Digit replacement: A generic map for nonlinear dynamical systems.

    PubMed

    García-Morales, Vladimir

    2016-09-01

    A simple discontinuous map is proposed as a generic model for nonlinear dynamical systems. The orbit of the map admits exact solutions for wide regions in parameter space and the method employed (digit manipulation) allows the mathematical design of useful signals, such as regular or aperiodic oscillations with specific waveforms, the construction of complex attractors with nontrivial properties as well as the coexistence of different basins of attraction in phase space with different qualitative properties. A detailed analysis of the dynamical behavior of the map suggests how the latter can be used in the modeling of complex nonlinear dynamics including, e.g., aperiodic nonchaotic attractors and the hierarchical deposition of grains of different sizes on a surface.
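    The abstract does not give the map itself, so the following is only a hypothetical illustration of the digit-manipulation idea: for the base-b shift map x -> frac(b·x), one application deletes the leading digit of x, so entire orbits can be written down exactly by operating on digit strings, and choosing the digit string designs the signal:

```python
# Hypothetical illustration (not the paper's specific map): orbits of the
# base-10 shift map x -> frac(10 * x), represented exactly as digit strings.
def shift_orbit(digits, steps):
    """Each application of the shift map drops the leading digit."""
    return [digits[k:] for k in range(steps)]

# Designing an aperiodic signal by choosing a non-repeating digit string:
d = "101001000100001"       # gaps between 1s grow, so the pattern never repeats
print(shift_orbit(d, 3))
```

    Exact solvability here comes from the fact that the dynamics acts as pure bookkeeping on digits, which is the spirit of the digit-replacement construction described above.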

  15. Maximal regularity in lp spaces for discrete time fractional shifted equations

    NASA Astrophysics Data System (ADS)

    Lizama, Carlos; Murillo-Arcila, Marina

    2017-09-01

    In this paper, we present a new method based on operator-valued Fourier multipliers to characterize the existence and uniqueness of ℓp-solutions for discrete time fractional models in the form where A is a closed linear operator defined on a Banach space X and Δα denotes the Grünwald-Letnikov fractional derivative of order α > 0. If X is a UMD space, we provide this characterization only in terms of the R-boundedness of the operator-valued symbol associated to the abstract model. To illustrate our results, we derive new qualitative properties of nonlinear difference equations with shifts, including fractional versions of the logistic and Nagumo equations.
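    The Grünwald-Letnikov operator Δα appearing in the model has a standard explicit form: weights w_j = (-1)^j C(α, j), generated by a one-term recursion. A small numerical sketch of just this operator (the abstract operator A and the ℓp theory are of course not touched here):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_j = (-1)^j * binom(alpha, j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j   # standard weight recursion
    return w

def gl_difference(f, alpha):
    """Discrete fractional difference (Delta^alpha f)(n) = sum_j w_j f(n-j)."""
    w = gl_weights(alpha, len(f) - 1)
    return np.array([w[: n + 1] @ f[n::-1] for n in range(len(f))])

# For alpha = 1 the weights collapse to (1, -1, 0, ...), i.e., the ordinary
# first-order backward difference.
print(gl_weights(1.0, 3))
```

    For non-integer α the weights decay slowly and never vanish, which is what gives the fractional difference its memory over the whole history of the sequence.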

  16. Compressible-Incompressible Two-Phase Flows with Phase Transition: Model Problem

    NASA Astrophysics Data System (ADS)

    Watanabe, Keiichi

    2017-12-01

    We study compressible and incompressible two-phase flows separated by a sharp interface with a phase transition and a surface tension. In particular, we consider the problem in R^N, where the Navier-Stokes-Korteweg equations are used in the upper domain and the Navier-Stokes equations are used in the lower domain. We prove the existence of R-bounded solution operator families for a resolvent problem arising from its model problem. According to Göts and Shibata (Asymptot Anal 90(3-4):207-236, 2014), the regularity of ρ_+ is W^1_q in space, but to solve the kinetic equation: u_Γ \cdot n_t = [[ρ u

  17. Impact of additives on the formation of protein aggregates and viscosity in concentrated protein solutions.

    PubMed

    Bauer, Katharina Christin; Suhm, Susanna; Wöll, Anna Katharina; Hubbuch, Jürgen

    2017-01-10

    In concentrated protein solutions, attractive protein interactions may cause not only the formation of undesired aggregates but also gel-like networks with elevated viscosity. To guarantee stable biopharmaceutical processes and safe formulations, both phenomena have to be avoided, as they may hinder regular processing steps. This work screens the impact of additives on both the phase behavior and the viscosity of concentrated protein solutions. For this purpose, additives known for stabilizing proteins in solution or modulating the dynamic viscosity were selected: PEG 300, PEG 1000, glycerol, glycine, NaCl and ArgHCl. Concentrated lysozyme and glucose oxidase solutions at pH 3 and 9 served as model systems. Fourier-transform infrared spectroscopy was chosen to determine the conformational stability of selected protein samples. Because the additives act on protein interactions, their impact was strongly dependent on pH. Of all additives investigated, glycine was the only one that maintained protein conformational and colloidal stability while decreasing the dynamic viscosity. Low concentrations of NaCl showed the same effect, but increasing concentrations resulted in visible protein aggregation. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Hydrostatic equilibrium of stars without electroneutrality constraint

    NASA Astrophysics Data System (ADS)

    Krivoruchenko, M. I.; Nadyozhin, D. K.; Yudin, A. V.

    2018-04-01

    The general solution of hydrostatic equilibrium equations for a two-component fluid of ions and electrons without a local electroneutrality constraint is found in the framework of Newtonian gravity theory. In agreement with the Poincaré theorem on analyticity and in the context of Dyson's argument, the general solution is demonstrated to possess a fixed (essential) singularity in the gravitational constant G at G =0 . The regular component of the general solution can be determined by perturbation theory in G starting from a locally neutral solution. The nonperturbative component obtained using the method of Wentzel, Kramers and Brillouin is exponentially small in the inner layers of the star and grows rapidly in the outward direction. Near the surface of the star, both components are comparable in magnitude, and their nonlinear interplay determines the properties of an electro- or ionosphere. The stellar charge varies within the limits of -0.1 to 150 C per solar mass. The properties of electro- and ionospheres are exponentially sensitive to variations of the fluid densities in the central regions of the star. The general solutions of two exactly solvable stellar models without a local electroneutrality constraint are also presented.

  19. Mascons, GRACE, and Time-variable Gravity

    NASA Technical Reports Server (NTRS)

    Lemoine, F.; Lutchke, S.; Rowlands, D.; Klosko, S.; Chinn, D.; Boy, J. P.

    2006-01-01

    The GRACE mission has been in orbit for three years and now regularly produces snapshots of the Earth's gravity field on a monthly basis. The convenient standard approach has been to perform global solutions in spherical harmonics. Alternative local representations of mass variations using mascons show great promise and offer advantages in terms of computational efficiency, minimization of problems due to aliasing, and increased temporal resolution. In this paper, we discuss the results of processing the GRACE KBRR data from March 2003 through August 2005 to produce solutions for GRACE mass variations over mid-latitude and equatorial regions, such as South America, India and the United States, and over the polar regions (Antarctica and Greenland), with a focus on the methodology. We describe in particular mascon solutions developed on regular 4 degree x 4 degree grids, and those tailored specifically to drainage basins over these regions.

  20. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

    The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
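
    The Tikhonov-regularized minimum norm approach referred to in this abstract can be sketched generically. This is a minimal illustration, not the authors' implementation; the lead-field matrix `G`, data `b`, and regularization weight `lam` are made-up toy values.

    ```python
    import numpy as np

    def tikhonov_min_norm(G, b, lam):
        """Regularized minimum norm solution of the underdetermined system
        G x = b, namely x = G^T (G G^T + lam*I)^{-1} b. The penalty lam
        stabilizes the inversion against noise in b."""
        m = G.shape[0]
        return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(m), b)

    rng = np.random.default_rng(0)
    G = rng.standard_normal((5, 20))       # few sensors, many source voxels
    x_true = np.zeros(20)
    x_true[3] = 1.0
    b = G @ x_true + 0.01 * rng.standard_normal(5)

    x = tikhonov_min_norm(G, b, lam=0.1)
    ```

    As lam tends to zero this reduces to the classical (unregularized) minimum norm solution, which becomes increasingly unstable on noisy data.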

  1. A deformation of Sasakian structure in the presence of torsion and supergravity solutions

    NASA Astrophysics Data System (ADS)

    Houri, Tsuyoshi; Takeuchi, Hiroshi; Yasui, Yukinori

    2013-07-01

    A deformation of Sasakian structure in the presence of totally skew-symmetric torsion is discussed on odd-dimensional manifolds whose metric cones are Kähler with torsion. It is shown that such a geometry inherits similar properties to those of Sasakian geometry. As an example, we present an explicit expression for local metrics. It is also demonstrated that our example of the metrics admits the existence of hidden symmetry described by non-trivial odd-rank generalized closed conformal Killing-Yano tensors. Furthermore, using these metrics as an ansatz, we construct exact solutions in five-dimensional minimal gauged/ungauged supergravity and 11-dimensional supergravity. Finally, the global structures of the solutions are discussed. We obtain regular metrics on compact manifolds in five dimensions, which give natural generalizations of Sasaki-Einstein manifolds Yp, q and La, b, c. We also briefly discuss regular metrics on non-compact manifolds in 11 dimensions.

  2. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered in which the unknown parameters are sensitive to data perturbations, so an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
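
    The stabilized variant is specific to this paper, but the underlying orthogonal matching pursuit (OMP) greedy scheme it builds on can be sketched as follows. This is a generic toy example, not the SOMP algorithm itself; the dictionary, sparsity level, and signal are made up.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Plain orthogonal matching pursuit: greedily pick the column of A
        most correlated with the residual, then least-squares fit y on the
        selected support, for k rounds."""
        residual = y.copy()
        support = []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 60))
    A /= np.linalg.norm(A, axis=0)        # unit-norm dictionary columns
    x_true = np.zeros(60)
    x_true[[5, 17, 40]] = [1.0, -2.0, 0.5]
    y = A @ x_true

    x_hat = omp(A, y, k=3)
    ```

    The stopping rule (here a fixed `k`) is exactly where a stabilized variant would differ, e.g. by detecting the convergence point that fixes the optimal sparsity level.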

  3. Wormhole solutions with a complex ghost scalar field and their instability

    NASA Astrophysics Data System (ADS)

    Dzhunushaliev, Vladimir; Folomeev, Vladimir; Kleihaus, Burkhard; Kunz, Jutta

    2018-01-01

    We study compact configurations with a nontrivial wormholelike spacetime topology supported by a complex ghost scalar field with a quartic self-interaction. For this case, we obtain regular asymptotically flat equilibrium solutions possessing reflection symmetry. We then show their instability with respect to linear radial perturbations.

  4. A Curved, Elastostatic Boundary Element for Plane Anisotropic Structures

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S.; Klang, Eric C.

    2001-01-01

    The plane-stress equations of linear elasticity are used in conjunction with those of the boundary element method to develop a novel curved, quadratic boundary element applicable to structures composed of anisotropic materials in a state of plane stress or plane strain. The curved boundary element is developed to solve two-dimensional, elastostatic problems of arbitrary shape, connectivity, and material type. As a result of the anisotropy, complex variables are employed in the fundamental solution derivations for a concentrated unit-magnitude force in an infinite elastic anisotropic medium. Once known, the fundamental solutions are evaluated numerically by using the known displacement and traction boundary values in an integral formulation with Gaussian quadrature. All the integral equations of the boundary element method are evaluated using one of two methods: either regular Gaussian quadrature or a combination of regular and logarithmic Gaussian quadrature. The regular Gaussian quadrature is used to evaluate most of the integrals along the boundary, and the combined scheme is employed for integrals that are singular. Individual element contributions are assembled into the global matrices of the standard boundary element method, manipulated to form a system of linear equations, and the resulting system is solved. The interior displacements and stresses are found through a separate set of auxiliary equations that are derived using an Airy-type stress function in terms of complex variables. The capabilities and accuracy of this method are demonstrated for a laminated-composite plate with a central, elliptical cutout that is subjected to uniform tension along one of the straight edges of the plate. Comparison of the boundary element results for this problem with corresponding results from an analytical model shows a difference of less than 1%.

  5. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems arising in the regularization of inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized minimization problems to be nonnegative and sparse, and then apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
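
    The paper's gradient-type method is not reproduced here, but a generic proximal-gradient sketch for the same kind of objective, min ½||Ax − y||² + λ·Σxᵢ subject to x ≥ 0, illustrates how nonnegativity and sparsity can both be enforced in one cheap step. All data and parameter values below are illustrative.

    ```python
    import numpy as np

    def nonneg_sparse_pg(A, y, lam=0.1, step=None, iters=500):
        """Proximal gradient for min 0.5||Ax - y||^2 + lam*sum(x), x >= 0.
        On the nonnegative orthant the l1 penalty is linear, so the prox
        step reduces to a shifted projection onto x >= 0."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)
            x = np.maximum(x - step * (grad + lam), 0.0)  # prox + projection
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((40, 80))
    x_true = np.zeros(80)
    x_true[[10, 30]] = [1.0, 2.0]
    y = A @ x_true

    x = nonneg_sparse_pg(A, y, lam=0.1)
    ```

    The projection yields exact zeros, so the iterates are sparse as well as nonnegative; a semismooth Newton method accelerates the local convergence of essentially this fixed-point map.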

  6. Dirac-Born-Infeld actions and tachyon monopoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calo, Vincenzo; Tallarita, Gianni; Thomas, Steven

    2010-04-15

    We investigate magnetic monopole solutions of the non-Abelian Dirac-Born-Infeld (DBI) action describing two coincident non-BPS D9-branes in flat space. Just as in the case of kink and vortex solitonic tachyon solutions of the full DBI non-BPS actions, as previously analyzed by Sen, these monopole configurations are singular in the first instance and require regularization. We discuss a suitable non-Abelian ansatz that describes a pointlike magnetic monopole and show it solves the equations of motion to leading order in the regularization parameter. Fluctuations are studied and shown to describe a codimension three BPS D6-brane, and a formula is derived for its tension.

  7. Some Investigations Relating to the Elastostatics of a Tapered Tube

    DTIC Science & Technology

    1978-03-01

    regularity of the solution on the Z axis. Indeed, the assumption of such regularity is stated explicitly by Heins (p. 789) and the problems solved (e.g. a... assumptions, becomes where the integrand is evaluated at (+i, 0). This is a form of the integral representation of the... solution. Now let us look at the assumptions on Q. First of all, in order to be sure that our operations are legi...

  8. Analysis of borehole expansion and gallery tests in anisotropic rock masses

    USGS Publications Warehouse

    Amadei, B.; Savage, W.Z.

    1991-01-01

    Closed-form solutions are used to show how rock anisotropy affects the variation of the modulus of deformation around the walls of a hole in which expansion tests are conducted. These tests include dilatometer and NX-jack tests in boreholes and gallery tests in tunnels. The effects of rock anisotropy on the modulus of deformation are shown for transversely isotropic and regularly jointed rock masses with planes of transverse isotropy or joint planes parallel or normal to the hole longitudinal axis for plane strain or plane stress condition. The closed-form solutions can also be used when determining the elastic properties of anisotropic rock masses (intact or regularly jointed) in situ. © 1991.

  9. Charge generation layers for solution processed tandem organic light emitting diodes with regular device architecture.

    PubMed

    Höfle, Stefan; Bernhard, Christoph; Bruns, Michael; Kübel, Christian; Scherer, Torsten; Lemmer, Uli; Colsmann, Alexander

    2015-04-22

    Tandem organic light emitting diodes (OLEDs) utilizing fluorescent polymers in both sub-OLEDs and a regular device architecture were fabricated from solution, and their structure and performance characterized. The charge carrier generation layer comprised a zinc oxide layer, modified by a polyethylenimine interface dipole, for electron injection and either MoO3, WO3, or VOx for hole injection into the adjacent sub-OLEDs. ToF-SIMS investigations and STEM-EDX mapping verified the distinct functional layers throughout the layer stack. At a given device current density, the current efficiencies of both sub-OLEDs add up to a maximum of 25 cd/A, indicating a properly working tandem OLED.

  10. Global Regularity and Time Decay for the 2D Magnetohydrodynamic Equations with Fractional Dissipation and Partial Magnetic Diffusion

    NASA Astrophysics Data System (ADS)

    Dong, Bo-Qing; Jia, Yan; Li, Jingna; Wu, Jiahong

    2018-05-01

    This paper focuses on a system of the 2D magnetohydrodynamic (MHD) equations with the kinematic dissipation given by the fractional operator (-Δ)^α and the magnetic diffusion by a partial Laplacian. We are able to show that this system with any α > 0 always possesses a unique global smooth solution when the initial data is sufficiently smooth. In addition, we make a detailed study on the large-time behavior of these smooth solutions and obtain optimal large-time decay rates. Since the magnetic diffusion is only partial here, some classical tools such as the maximal regularity property for the 2D heat operator can no longer be applied. A key observation on the structure of the MHD equations allows us to get around the difficulties due to the lack of full Laplacian magnetic diffusion. The results presented here are the sharpest on the global regularity problem for the 2D MHD equations with only partial magnetic diffusion.

  11. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    PubMed

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values into the solution, and this negativity produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem, and the solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
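
    The GPRN solver of this paper is not reproduced here; as a generic illustration of an EM-style reconstruction that preserves nonnegativity by construction, a multiplicative ML-EM (Richardson-Lucy type) update for a linear model y ≈ Ax can be sketched as below. The matrix and data are made-up toy values.

    ```python
    import numpy as np

    def mlem(A, y, iters=100):
        """Multiplicative EM update for y ~ Ax with nonnegative A, x, y.
        Since the update multiplies by nonnegative ratios, the iterates
        stay nonnegative automatically - no explicit constraint needed."""
        x = np.ones(A.shape[1])
        col_sums = A.sum(axis=0)
        for _ in range(iters):
            ratio = y / np.maximum(A @ x, 1e-12)          # data/model ratio
            x *= (A.T @ ratio) / np.maximum(col_sums, 1e-12)
        return x

    rng = np.random.default_rng(4)
    A = rng.random((30, 20))          # nonnegative forward operator
    x_true = rng.random(20)           # nonnegative ground truth
    y = A @ x_true

    x = mlem(A, y)
    ```

    This is why EM-type methods avoid the negative-value artifacts of unconstrained Tikhonov solutions: negativity is impossible at every iterate, not merely penalized.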

  12. Helicobacter pylori displays spiral trajectories while swimming like a cork-screw in solutions

    NASA Astrophysics Data System (ADS)

    Constantino, Maira A.; Hardcastle, Joseph M.; Bansil, Rama; Jabbarzadeh, Mehdi; Fu, Henry C.

    Helicobacter pylori is a helically shaped bacterium that causes gastritis, ulcers and gastric cancer in humans and other animals. In order to colonize the harsh acidic environment of the stomach, H. pylori has evolved a unique biochemical mechanism to cross the viscoelastic gel-like gastric mucus layer. Many studies have been conducted on the swimming of H. pylori in viscous media. However, a yet unanswered question is whether the helical cell shape influences bacterial swimming dynamics or confers any advantage when swimming in viscous solutions. We present measurements of H. pylori trajectories displaying corkscrew motion while swimming in solution, obtained by tracking single cells using two-dimensional phase contrast imaging at high magnification and fast frame rates while simultaneously imaging their shape. We observe a linear relationship between swimming speed and rotation rate. The experimental trajectories show good agreement with trajectories calculated using a regularized Stokeslet method to model the low Reynolds number swimming behavior. Supported by NSF PHY 1410798 (PI: RB).

  13. Three-gradient regular solution model for simple liquids wetting complex surface topologies

    PubMed Central

    Akerboom, Sabine; Kamperman, Marleen

    2016-01-01

    Summary We use regular solution theory and implement a three-gradient model for a liquid/vapour system in contact with a complex surface topology to study the shape of a liquid drop in advancing and receding wetting scenarios. More specifically, we study droplets on an inverse opal: spherical cavities in a hexagonal pattern. In line with experimental data, we find that the surface may switch from hydrophilic (contact angle on a smooth surface θY < 90°) to hydrophobic (effective advancing contact angle θ > 90°). Both the Wenzel wetting state (cavities under the liquid are filled) and the Cassie–Baxter wetting state (air entrapment in the cavities under the liquid) were observed using our approach, without a discontinuity in the water front shape or in the advancing contact angle θ. Therefore, air entrapment cannot be the main reason why the contact angle θ for an advancing water front varies. Rather, the contact line is pinned and curved due to the surface structures, inducing curvature perpendicular to the plane in which the contact angle θ is observed, and the contact line does not move in a continuous way, but via depinning transitions. The pinning is not limited to kinks in the surface with angles θkink smaller than the angle θY; even for θkink > θY, contact line pinning is found. Therefore, the full 3D structure of the inverse opal, rather than a simple parameter such as the wetting state or θkink, determines the final observed contact angle. PMID:27826512

  14. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

    Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to overcome this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
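
    Edge-preserving regularizers of the kind referenced here penalize gradients roughly linearly rather than quadratically, so sharp transitions like the tachocline survive where a global-smoothness penalty would blur them. A generic smoothed total-variation denoising sketch (illustrative only, not the authors' inversion code; all parameter values are made up) shows the idea in 1D:

    ```python
    import numpy as np

    def tv_denoise(y, lam=0.5, eps=1e-2, step=0.02, iters=3000):
        """Gradient descent on a smoothed total-variation objective:
        0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps).
        Large jumps cost ~|jump|, not jump^2, so edges are preserved."""
        x = y.copy()
        for _ in range(iters):
            d = np.diff(x)
            w = d / np.sqrt(d**2 + eps)   # derivative of smoothed |.|
            g = x - y                     # data-fit gradient
            g[:-1] -= lam * w             # -phi'(d_i) term at index i
            g[1:] += lam * w              # +phi'(d_{i-1}) term at index i
            x -= step * g
        return x

    # Step profile (sharp "tachocline-like" transition) plus noise
    rng = np.random.default_rng(3)
    y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
    x = tv_denoise(y)
    ```

    A quadratic (first-derivative norm) regularizer with the same weight would round off the step; the TV-type penalty flattens the noise on each plateau while leaving the jump essentially intact.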

  15. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D elliptic stochastic partial differential equations in random space.

  16. Resolution and Trade-offs in Finite Fault Inversions for Large Earthquakes Using Teleseismic Signals (Invited)

    NASA Astrophysics Data System (ADS)

    Lay, T.; Ammon, C. J.

    2010-12-01

    An unusually large number of widely distributed great earthquakes have occurred in the past six years, with extensive data sets of teleseismic broadband seismic recordings being available in near-real time for each event. Numerous research groups have implemented finite-fault inversions that utilize the rapidly accessible teleseismic recordings, and slip models are regularly determined and posted on websites for all major events. The source inversion validation project has already demonstrated that for events of all sizes there is often significant variability in models for a given earthquake. Some of these differences can be attributed to variations in data sets and procedures used for including signals with very different bandwidth and signal characteristics into joint inversions. Some differences can also be attributed to choice of velocity structure and data weighting. However, our experience is that some of the primary causes of solution variability involve rupture model parameterization and imposed kinematic constraints such as rupture velocity and subfault source time function description. In some cases it is viable to rapidly perform separate procedures such as teleseismic array back-projection or surface wave directivity analysis to reduce the uncertainties associated with rupture velocity, and it is possible to explore a range of subfault source parameterizations to place some constraints on which model features are robust. In general, many such tests are performed, but not fully described, with single model solutions being posted or published, with limited insight into solution confidence being conveyed. Using signals from recent great earthquakes in the Kuril Islands, Solomon Islands, Peru, Chile and Samoa, we explore issues of uncertainty and robustness of solutions that can be rapidly obtained by inversion of teleseismic signals. Formalizing uncertainty estimates remains a formidable undertaking and some aspects of that challenge will be addressed.

  17. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As stated in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects through diffuse-interface models [27]. Numerical simulations of this practical model can be used to evaluate it. The present paper investigates the solution of the tumor growth model with meshless techniques. Meshless methods are applied based on the collocation technique, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches. Moreover, a meshless method can easily be applied to find the solution of partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor growth model. To handle the time variable, two procedures are used. One is a semi-implicit finite difference method based on the Crank-Nicolson scheme, and the other is based on explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations that is solved at each time step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques for solving the two- and three-dimensional tumor-growth equations.
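
    The essence of MQ RBF collocation is a linear solve against a kernel matrix built from scattered nodes, which is why the approach needs no mesh. A minimal 1D interpolation sketch (a generic illustration, not the paper's PDE solver; node count and shape parameter `c` are made-up values):

    ```python
    import numpy as np

    def mq(r, c=0.1):
        """Multiquadric radial basis function sqrt(r^2 + c^2);
        c is the shape parameter."""
        return np.sqrt(r**2 + c**2)

    # Collocation: solve Phi w = f on scattered nodes, where
    # Phi_ij = mq(|x_i - x_j|). Same machinery extends to PDEs by
    # collocating the differential operator instead of the identity.
    nodes = np.linspace(0.0, 1.0, 15)
    f = np.sin(2 * np.pi * nodes)
    Phi = mq(np.abs(nodes[:, None] - nodes[None, :]))
    w = np.linalg.solve(Phi, f)

    # Evaluate the interpolant anywhere - no grid structure required
    xs = np.linspace(0.0, 1.0, 101)
    S = mq(np.abs(xs[:, None] - nodes[None, :])) @ w
    ```

    For a time-dependent PDE, a Crank-Nicolson step would replace `Phi` on the left-hand side with the collocated operator (I - dt/2·L) applied to the basis, giving one such linear solve per time step.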

  18. Particle-like solutions of the Einstein-Dirac-Maxwell equations

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel; Yau, Shing-Tung

    1999-08-01

    We consider the coupled Einstein-Dirac-Maxwell equations for a static, spherically symmetric system of two fermions in a singlet spinor state. Soliton-like solutions are constructed numerically. The stability and the properties of the ground state solutions are discussed for different values of the electromagnetic coupling constant. We find solutions even when the electromagnetic coupling is so strong that the total interaction is repulsive in the Newtonian limit. Our solutions are regular and well-behaved; this shows that the combined electromagnetic and gravitational self-interaction of the Dirac particles is finite.

  19. A constrained regularization method for inverting data represented by linear algebraic or integral equations

    NASA Astrophysics Data System (ADS)

    Provencher, Stephen W.

    1982-09-01

    CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizer, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizers, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.

  20. Regularized solution of a nonlinear problem in electromagnetic sounding

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Fenu, Caterina; Rodriguez, Giuseppe

    2014-12-01

    Nondestructive investigation of soil properties is crucial when trying to identify inhomogeneities in the ground or the presence of conductive substances. This kind of survey can be addressed with the aid of electromagnetic induction measurements taken with a ground conductivity meter. In this paper, starting from electromagnetic data collected by this device, we reconstruct the electrical conductivity of the soil with respect to depth with the aid of a regularized damped Gauss-Newton method. We propose an inversion method based on the low-rank approximation of the Jacobian of the function to be inverted, for which we develop exact analytical formulae. The algorithm chooses a relaxation parameter in order to ensure the positivity of the solution and implements various methods for the automatic estimation of the regularization parameter. This leads to a fast and reliable algorithm, which is tested in numerical experiments on both synthetic data sets and field data. The results show that the algorithm produces reasonable solutions in the case of synthetic data sets, even in the presence of a noise level consistent with real applications, and yields results that are compatible with those obtained by electrical resistivity tomography in the case of field data. Research supported in part by Regione Sardegna grant CRP2_686.
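
    The regularized damped Gauss-Newton iteration named here can be sketched generically: each step solves a Tikhonov-stabilized normal system and applies a relaxation factor. This is a toy illustration, not the paper's inversion code; the fixed relaxation `damp`, the weight `alpha`, and the exponential test model are all made-up choices (the paper instead adapts the relaxation parameter to keep the solution positive).

    ```python
    import numpy as np

    def damped_gauss_newton(F, J, x0, alpha=1e-3, damp=0.5, iters=20):
        """Minimize ||F(x)||^2 by regularized damped Gauss-Newton:
        (J^T J + alpha*I) dx = -J^T F, then x <- x + damp*dx.
        alpha stabilizes near-singular Jacobians; damp controls step size."""
        x = x0.astype(float)
        for _ in range(iters):
            r = F(x)
            Jx = J(x)
            dx = np.linalg.solve(Jx.T @ Jx + alpha * np.eye(x.size), -Jx.T @ r)
            x = x + damp * dx
        return x

    # Toy nonlinear model: recover the decay rate k in y = exp(-k t)
    t = np.linspace(0.0, 1.0, 10)
    y = np.exp(-2.0 * t)
    F = lambda x: np.exp(-x[0] * t) - y                 # residual vector
    J = lambda x: (-t * np.exp(-x[0] * t))[:, None]     # Jacobian, 10 x 1

    k = damped_gauss_newton(F, J, np.array([0.5]))
    ```

    Note the regularization term shifts only the step, not the fixed point: the iteration is stationary exactly where J^T F = 0, i.e. at a least-squares solution.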

  1. Quaternion regularization in celestial mechanics, astrodynamics, and trajectory motion control. III

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    2015-09-01

    The present paper analyzes the basic problems arising in the solution of problems of the optimum control of spacecraft (SC) trajectory motion (including the Lyapunov instability of solutions of conjugate equations) using the principle of the maximum. The use of quaternion models of astrodynamics is shown to allow: (1) the elimination of singular points in the differential phase and conjugate equations and in their partial analytical solutions; (2) construction of the first integrals of the new quaternion; (3) a considerable decrease of the dimensions of systems of differential equations of boundary value optimization problems with their simultaneous simplification by using the new quaternion variables related with quaternion constants of motion by rotation transformations; (4) construction of general solutions of differential equations for phase and conjugate variables on the sections of SC passive motion in the simplest and most convenient form, which is important for the solution of optimum pulse SC transfers; (5) the extension of the possibilities of the analytical investigation of differential equations of boundary value problems with the purpose of identifying the basic laws of optimum control and motion of SC; (6) improvement of the computational stability of the solution of boundary value problems; (7) a decrease in the required volume of computation.

  2. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    PubMed

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

    Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias-corrected MRI, we present a high-order and L0 regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise-constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN based on a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate automatic processing of large-scale brain studies.

  3. Potential profile near singularity point in kinetic Tonks-Langmuir discharges as a function of the ion sources temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kos, L.; Tskhakaya, D. D.; Jelic, N.

    2011-05-15

    A plasma-sheath transition analysis requires a reliable mathematical expression for the plasma potential profile Φ(x) near the sheath edge x_s in the limit ε ≡ λ_D/l = 0 (where λ_D is the Debye length and l is a proper characteristic length of the discharge). Such expressions have been explicitly calculated for the fluid model and the singular (cold ion source) kinetic model, where exact analytic solutions of the plasma equation (ε = 0) are known, but not for the regular (warm ion source) kinetic model, where no analytic solution of the plasma equation has ever been obtained. For the latter case, Riemann [J. Phys. D: Appl. Phys. 24, 493 (1991)] only predicted a general formula assuming relatively high ion-source temperatures, i.e., much higher than the plasma-sheath potential drop. Riemann's formula, however, according to him, was never confirmed in explicit solutions of particular models (e.g., that of Bissell and Johnson [Phys. Fluids 30, 779 (1987)] and Scheuer and Emmert [Phys. Fluids 31, 3645 (1988)]) since "the accuracy of the classical solutions is not sufficient to analyze the sheath vicinity" [Riemann, in Proceedings of the 62nd Annual Gaseous Electronic Conference, APS Meeting Abstracts, Vol. 54 (APS, 2009)]. Therefore, for many years there has been a need for an explicit calculation that might confirm Riemann's general formula for the potential profile at the sheath edge in the case of regular, very warm ion sources. Fortunately, we are now able to achieve very high accuracy of results [see, e.g., Kos et al., Phys. Plasmas 16, 093503 (2009)]. We perform this task by using both analytic and numerical methods with explicit Maxwellian and "water-bag" ion-source velocity distributions.
We find the potential profile near the plasma-sheath edge over the whole range of ion-source temperatures of general interest to plasma physics, from zero to "practical infinity." In the limits of "very low" and "relatively high" ion-source temperatures, the potential is proportional to the space coordinate raised to the rational powers α = 1/2 and α = 2/3, respectively. For intermediate ion-source temperatures we find α between these values to be a non-rational number that depends strongly on the ion-source temperature. The range of the non-rational power law turns out to be very narrow, at the expense of the extension of the α = 2/3 region towards unexpectedly low ion-source temperatures.
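
The power-law behavior described above can be checked numerically by extracting the exponent from a computed profile as a log-log least-squares slope. A minimal pure-Python sketch on synthetic data standing in for the numerical profiles (the 2/3 exponent is built in by construction here; names and data are illustrative, not from the paper):

```python
import math

def power_law_exponent(xs, phis):
    # Least-squares slope of log(phi) vs log(x), i.e. alpha in Phi(x) ~ x^alpha.
    lx = [math.log(x) for x in xs]
    ly = [math.log(p) for p in phis]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

# Synthetic profile with alpha = 2/3 (the "relatively high" temperature limit):
xs = [0.01 * k for k in range(1, 101)]
alpha = power_law_exponent(xs, [x ** (2.0 / 3.0) for x in xs])
```

Applied to a profile computed at intermediate ion-source temperatures, the same fit would return the non-rational exponent the authors report.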

  4. Thermodynamic properties of hematite — ilmenite — geikielite solid solutions

    NASA Astrophysics Data System (ADS)

    Ghiorso, Mark S.

    1990-11-01

    A solution model is developed for rhombohedral oxide solid solutions having compositions within the ternary system ilmenite [(Fe²⁺_s Ti⁴⁺_{1-s})^A (Fe²⁺_{1-s} Ti⁴⁺_s)^B O₃] - geikielite [(Mg²⁺_t Ti⁴⁺_{1-t})^A (Mg²⁺_{1-t} Ti⁴⁺_t)^B O₃] - hematite [(Fe³⁺)^A (Fe³⁺)^B O₃]. The model incorporates an expression for the configurational entropy of solution, which accounts for varying degrees of structural long-range order (0 ≤ s, t ≤ 1), and utilizes simple regular solution theory to characterize the excess Gibbs free energy of mixing within the five-dimensional composition-ordering space. The 13 model parameters are calibrated from available data on: (1) the degree of long-range order and the composition-temperature dependence of the R-3c - R-3 transition along the ilmenite-hematite binary join; (2) the compositions of coexisting olivine and rhombohedral oxide solid solutions close to the Mg-Fe²⁺ join; (3) the shape of the miscibility gap along the ilmenite-hematite join; (4) the compositions of coexisting spinel and rhombohedral oxide solid solutions along the Fe²⁺-Fe³⁺ join. In the course of calibration, estimates are obtained for the reference-state enthalpies of formation of ulvöspinel and stoichiometric hematite (-1488.5 and -822.0 kJ/mol at 298 K and 1 bar, respectively). The model involves no excess entropies of mixing, nor does it incorporate ternary interaction parameters. The formulation fits the available data and represents an internally consistent energetic model when used in conjunction with the standard-state thermodynamic data set of Berman (1988) and the solution theory for orthopyroxenes, olivines and Fe-Mg titanomagnetite-aluminate-chromate spinels developed by Sack and Ghiorso (1989, 1990a, b).
Calculated activity-composition relations for the end-members of the series demonstrate the substantial degree of nonideality associated with interactions between the ordered and disordered structures and the dominant influence of the miscibility gap across much of the ternary system. The predicted shape of the miscibility gap, and the orientation of tie-lines relating the compositions of coexisting phases, display the effects of coupling between the excess enthalpy of solution and the degree of long-range order. One limb of the miscibility gap follows the composition-temperature surface corresponding to the ternary R-3 - R-3c second-order transition.

  5. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral ones, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further study is needed of the development of methods and algorithms for solving the reduced mathematical programming problems, in which the objective functions and admissible domains are constructed using polyhedral vector norms.
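
The Euclidean-norm case admits a compact closed form: the regularized solution minimizes ||Ax - b||² + α²||x||² and solves the normal equations (AᵀA + α²I)x = Aᵀb. A minimal pure-Python sketch on toy data of my own (illustrative only; the polyhedral-norm generalizations discussed in the paper lead instead to mathematical programming problems):

```python
# Tikhonov-regularized least squares: minimize ||A x - b||^2 + alpha^2 ||x||^2
# via the normal equations (A^T A + alpha^2 I) x = A^T b.

def tikhonov(A, b, alpha):
    m, n = len(A), len(A[0])
    # M = A^T A + alpha^2 I,  rhs = A^T b
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (alpha ** 2 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# On a nearly rank-deficient system, increasing alpha shrinks the solution norm:
A = [[1.0, 1.0], [1.0, 1.000001], [0.0, 0.000001]]
b = [2.0, 2.0, 0.0]
x_small = tikhonov(A, b, 1e-8)   # close to the unregularized solution
x_large = tikhonov(A, b, 1.0)    # heavily damped
```

As α tends to zero the Tikhonov solution tends to the minimum-norm least-squares solution, which is the content of the Tikhonov lemma the paper generalizes.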

  6. Relativistic Bessel cylinders

    NASA Astrophysics Data System (ADS)

    Krisch, J. P.; Glass, E. N.

    2014-10-01

    A set of cylindrical solutions to Einstein's field equations for power law densities is described. The solutions have a Bessel function contribution to the metric. For matter cylinders regular on axis, the first two solutions are the constant density Gott-Hiscock string and a cylinder with a metric Airy function. All members of this family have the Vilenkin limit to their mass per length. Some examples of Bessel shells and Bessel motion are given.

  7. Evaluation and application of the ROMS 1-way embedding procedure to the central california upwelling system

    NASA Astrophysics Data System (ADS)

    Penven, Pierrick; Debreu, Laurent; Marchesiello, Patrick; McWilliams, James C.

    What most clearly distinguishes near-shore and off-shore currents is their dominant spatial scale, O (1-30) km near-shore and O (30-1000) km off-shore. In practice, these phenomena are usually both measured and modeled with separate methods. In particular, it is infeasible for any regular computational grid to be large enough to simultaneously resolve well both types of currents. In order to obtain local solutions at high resolution while preserving the regional-scale circulation at an affordable computational cost, a 1-way grid embedding capability has been integrated into the Regional Oceanic Modeling System (ROMS). It takes advantage of the AGRIF (Adaptive Grid Refinement in Fortran) Fortran 90 package based on the use of pointers. After a first evaluation in a baroclinic vortex test case, the embedding procedure has been applied to a domain that covers the central upwelling region off California, around Monterey Bay, embedded in a domain that spans the continental U.S. Pacific Coast. Long-term simulations (10 years) have been conducted to obtain mean-seasonal statistical equilibria. The final solution shows few discontinuities at the parent-child domain boundary and a valid representation of the local upwelling structure, at a CPU cost only slightly greater than for the inner region alone. The solution is assessed by comparison with solutions for the whole US Pacific Coast at both low and high resolutions and to solutions for only the inner region at high resolution with mean-seasonal boundary conditions.

  8. Promoting the Adsorption of Metal Ions on Kaolinite by Defect Sites: A Molecular Dynamics Study

    PubMed Central

    Li, Xiong; Li, Hang; Yang, Gang

    2015-01-01

    Defect sites exist abundantly in minerals and play a crucial role in a variety of important processes. Here molecular dynamics simulations are used to comprehensively investigate the adsorption behaviors, stabilities and mechanisms of metal ions on defective minerals, considering different ionic concentrations, defect sizes and contents. Outer-sphere adsorbed Pb2+ ions predominate for all models (regular and defective), while inner-sphere Na+ ions, which exist sporadically only at concentrated solutions for regular models, govern the adsorption for all defective models. Adsorption quantities and stabilities of metal ions on kaolinite are fundamentally promoted by defect sites, thus explaining the experimental observations. Defect sites improve the stabilities of both inner- and outer-sphere adsorption, and (quasi) inner-sphere Pb2+ ions emerge only at defect sites, which reinforce the interactions. Adsorption configurations are greatly altered by defect sites but respond weakly to changes in defect size or content. Both adsorption quantities and stabilities are enhanced by increasing defect sizes or contents, while ionic concentrations mainly affect adsorption quantities. We also find that adsorption of metal ions and anions can be promoted by each other and proceeds by a collaborative mechanism. The results are thus helpful for understanding related processes for all types of minerals. PMID:26403873

  9. Singular boundary method for global gravity field modelling

    NASA Astrophysics Data System (ADS)

    Cunderlik, Robert

    2014-05-01

    The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They were developed to avoid the singular numerical integration and mesh generation of the traditional boundary element method (BEM). SBM was proposed to overcome a main drawback of MFS: its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we apply SBM to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors makes it possible to evaluate the disturbing potential and gravity disturbances directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
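
As a point of comparison for the MFS side of the discussion, a minimal 2-D sketch: harmonic boundary data on the unit disk are fitted by a sum of Laplace fundamental solutions G(p, q) = -ln|p - q|/(2π) with source points on a fictitious exterior circle (the very feature SBM removes by moving sources onto the boundary). Toy setup of my own, not the study's solver:

```python
import math

def gauss_solve(M, rhs):
    # Dense Gaussian elimination with partial pivoting (small systems only).
    n = len(M)
    M = [row[:] for row in M]
    rhs = rhs[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def G(p, q):
    # Fundamental solution of the 2-D Laplace equation.
    return -math.log(math.hypot(p[0] - q[0], p[1] - q[1])) / (2.0 * math.pi)

def mfs_unit_disk(boundary_data, nsrc=24, r_src=2.0):
    # Collocation points on the unit circle; source points on a fictitious
    # circle of radius r_src strictly outside the domain.
    ang = [2.0 * math.pi * k / nsrc for k in range(nsrc)]
    col = [(math.cos(a), math.sin(a)) for a in ang]
    src = [(r_src * math.cos(a), r_src * math.sin(a)) for a in ang]
    A = [[G(p, q) for q in src] for p in col]
    rhs = [boundary_data(*p) for p in col]
    coef = gauss_solve(A, rhs)
    return lambda x, y: sum(c * G((x, y), q) for c, q in zip(coef, src))

u = mfs_unit_disk(lambda x, y: x)   # Dirichlet data u = x (harmonic)
```

Moving the sources onto the boundary itself would make G singular at the collocation nodes; SBM's origin intensity factors are precisely the device that replaces those diagonal entries.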

  10. Analysis of a system modelling the motion of a piston in a viscous gas

    NASA Astrophysics Data System (ADS)

    Maity, Debayan; Takahashi, Takéo; Tucsnak, Marius

    2017-09-01

    We study a free boundary problem modelling the motion of a piston in a viscous gas. The gas-piston system fills a cylinder with fixed extremities, which possibly allow gas from the exterior to penetrate inside the cylinder. The gas is modeled by the 1D compressible Navier-Stokes system and the piston motion is described by Newton's second law. We prove the existence and uniqueness of global-in-time strong solutions. The main novelty brought in by our results is that they include the case of nonhomogeneous boundary conditions which, as far as we know, have not been studied in this context. Moreover, even for homogeneous boundary conditions, our results require less regularity of the initial data than those obtained in previous works.

  11. TRPM8-Dependent Dynamic Response in a Mathematical Model of Cold Thermoreceptor

    PubMed Central

    Olivares, Erick; Salgado, Simón; Maidana, Jean Paul; Herrera, Gaspar; Campos, Matías; Madrid, Rodolfo; Orio, Patricio

    2015-01-01

    Cold-sensitive nerve terminals (CSNTs) encode steady temperatures with rhythmic, temperature-dependent firing patterns that range from irregular tonic firing to regular bursting (static response). During abrupt temperature changes, CSNTs show a dynamic response, transiently increasing their firing frequency as temperature decreases and silencing when the temperature increases (dynamic response). To date, mathematical models that simulate the static response are based on two depolarizing/repolarizing pairs of membrane ionic conductances (with slow and fast kinetics). However, these models fail to reproduce the dynamic response of CSNTs to rapid changes in temperature and, notably, they lack a specific cold-activated conductance such as the TRPM8 channel. We developed a model that includes TRPM8 as a temperature-dependent conductance with a calcium-dependent desensitization. We show by computer simulations that it appropriately reproduces the dynamic response of CSNTs from mouse cornea, while preserving their static response behavior. In this model, the TRPM8 conductance is essential to display a dynamic response. In agreement with experimental results, TRPM8 is also needed for the ongoing activity in the absence of stimulus (i.e. neutral skin temperature). Free parameters of the model were adjusted by an evolutionary optimization algorithm, allowing us to find different solutions. We present a family of possible parameters that reproduce the behavior of CSNTs under different temperature protocols. The detection of temperature gradients is associated with a homeostatic mechanism supported by the calcium-dependent desensitization. PMID:26426259

  12. Neyman, Markov processes and survival analysis.

    PubMed

    Yang, Grace

    2013-07-01

    J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.
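
The K-M product-limit estimator referenced above is easy to sketch for right-censored data given as (time, event-observed) pairs. A minimal pure-Python illustration with invented data (this is the plain K-M formulation, not the Fix-Neyman Markov-process extension discussed in the abstract):

```python
def kaplan_meier(times, observed):
    # Product-limit estimate of the survival function S(t).
    # observed[i] is True for an event (death), False for censoring.
    data = sorted(zip(times, observed))
    at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        group = [obs for tt, obs in data if tt == t]
        deaths = sum(group)
        if deaths:
            s *= 1.0 - deaths / at_risk     # survive this event time
            curve.append((t, s))
        at_risk -= len(group)               # events and censorings leave the risk set
        i += len(group)
    return curve

# Five patients: events at t = 1, 2, 4, 5; censoring at t = 3.
curve = kaplan_meier([1, 2, 3, 4, 5], [True, True, False, True, True])
# curve ~ [(1, 0.8), (2, 0.6), (4, 0.3), (5, 0.0)] up to rounding
```

The censored patient at t = 3 leaves the risk set without triggering a factor, which is how censoring enters the product.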

  13. Classical mutual information in mean-field spin glass models

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo; Inglis, Stephen; Pollet, Lode

    2016-03-01

    We investigate the classical Rényi entropy Sn and the associated mutual information In in the Sherrington-Kirkpatrick (S-K) model, which is the paradigm model of mean-field spin glasses. Using classical Monte Carlo simulations and analytical tools we investigate the S-K model in the n -sheet booklet. This is achieved by gluing together n independent copies of the model, and it is the main ingredient for constructing the Rényi entanglement-related quantities. We find a glassy phase at low temperatures, whereas at high temperatures the model exhibits paramagnetic behavior, consistent with the regular S-K model. The temperature of the paramagnetic-glassy transition depends nontrivially on the geometry of the booklet. At high temperatures we provide the exact solution of the model by exploiting the replica symmetry. This is the permutation symmetry among the fictitious replicas that are used to perform disorder averages (via the replica trick). In the glassy phase the replica symmetry has to be broken. Using a generalization of the Parisi solution, we provide analytical results for Sn and In and for standard thermodynamic quantities. Both Sn and In exhibit a volume law in the whole phase diagram. We characterize the behavior of the corresponding densities, Sn/N and In/N , in the thermodynamic limit. Interestingly, at the critical point the mutual information does not exhibit any crossing for different system sizes, in contrast with local spin models.

  14. Euclid, Fibonacci, Sketchpad.

    ERIC Educational Resources Information Center

    Litchfield, Daniel C.; Goldenheim, David A.

    1997-01-01

    Describes the solution to a geometric problem by two ninth-grade mathematicians using The Geometer's Sketchpad computer software program. The problem was to divide any line segment into a regular partition of any number of parts, a variation on a problem by Euclid. The solution yielded two constructions, one a GLaD construction and the other using…

  15. Persistent Problems and Promising Solutions in Inservice Education. Report of Selected REGI Project Directors.

    ERIC Educational Resources Information Center

    Grigsby, Greg

    This report summarizes and presents information from interviews with 22 National Inservice Network project directors. The purpose was to identify problems and solutions encountered in directing regular education inservice (REGI) projects. The projects were sponsored by institutions of higher education, state and local education agencies, and an…

  16. The Automated Root Exudate System (ARES): a method to apply solutes at regular intervals to soils in the field.

    PubMed

    Lopez-Sangil, Luis; George, Charles; Medina-Barcenas, Eduardo; Birkett, Ali J; Baxendale, Catherine; Bréchet, Laëtitia M; Estradera-Gumbau, Eduard; Sayer, Emma J

    2017-09-01

    Root exudation is a key component of nutrient and carbon dynamics in terrestrial ecosystems. Exudation rates vary widely by plant species and environmental conditions, but our understanding of how root exudates affect soil functioning is incomplete, in part because there are few viable methods to manipulate root exudates in situ. To address this, we devised the Automated Root Exudate System (ARES), which simulates increased root exudation by applying small amounts of labile solutes at regular intervals in the field.
The ARES is a gravity-fed drip irrigation system comprising a reservoir bottle connected via a timer to a micro-hose irrigation grid covering c. 1 m²; 24 drip-tips are inserted into the soil to 4-cm depth to apply solutions into the rooting zone. We installed two ARES subplots within existing litter removal and control plots in a temperate deciduous woodland. We applied either an artificial root exudate solution (RE) or a procedural control solution (CP) to each subplot for 1 min day⁻¹ during two growing seasons. To investigate the influence of root exudation on soil carbon dynamics, we measured soil respiration monthly and soil microbial biomass at the end of each growing season.
The ARES applied the solutions at a rate of c. 2 L m⁻² week⁻¹ without significantly increasing soil water content. The application of RE solution had a clear effect on soil carbon dynamics, but the response varied by litter treatment. Across two growing seasons, soil respiration was 25% higher in RE than in CP subplots in the litter removal treatment, but not in the control plots. By contrast, we observed a significant increase in microbial biomass carbon (33%) and nitrogen (26%) in RE subplots in the control litter treatment.
The ARES is an effective, low-cost method to apply experimental solutions directly into the rooting zone in the field. The installation of the system entails minimal disturbance to the soil and little maintenance is required. Although we used ARES to apply a root exudate solution, the method can be used to apply many other treatments involving solute inputs at regular intervals in a wide range of ecosystems.

  17. On the Global Regularity of a Helical-Decimated Version of the 3D Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Biferale, Luca; Titi, Edriss S.

    2013-06-01

    We study the global regularity, for all time and all initial data in H^{1/2}, of a recently introduced decimated version of the incompressible 3D Navier-Stokes (dNS) equations. The model is based on a projection of the dynamical evolution of the Navier-Stokes (NS) equations onto the subspace where helicity (the L²-scalar product of velocity and vorticity) is sign-definite. The presence of a second (besides energy) sign-definite inviscid conserved quadratic quantity, which is equivalent to the H^{1/2}-Sobolev norm, allows us to demonstrate global existence and uniqueness of space-periodic solutions, together with continuity with respect to the initial conditions, for this decimated 3D model. This is achieved thanks to the establishment of two new estimates for this 3D model, which show that the H^{1/2} norm and the time average of the square of the H^{3/2} norm of the velocity field remain finite. These two additional bounds are known, in the spirit of the work of H. Fujita and T. Kato (Arch. Ration. Mech. Anal. 16:269-315, 1964; Rend. Semin. Mat. Univ. Padova 32:243-260, 1962), to be sufficient for showing well-posedness for the 3D NS equations. Furthermore, they are directly linked to the helicity evolution for the dNS model, and therefore have a clear physical meaning and consequences.

  18. Regularization of instabilities in gravity theories

    NASA Astrophysics Data System (ADS)

    Ramazanoǧlu, Fethi M.

    2018-01-01

    We investigate instabilities and their regularization in theories of gravitation. Instabilities can be beneficial since their growth often leads to prominent observable signatures, which makes them especially relevant to relatively low signal-to-noise ratio measurements such as gravitational wave detections. An indefinitely growing instability usually renders a theory unphysical; hence, a desirable instability should also come with underlying physical machinery that stops the growth at finite values, i.e., regularization mechanisms. The prototypical gravity theory that presents such an instability is the spontaneous scalarization phenomena of scalar-tensor theories, which feature a tachyonic instability. We identify the regularization mechanisms in this theory and show that they can be utilized to regularize other instabilities as well. Namely, we present theories in which spontaneous growth is triggered by a ghost rather than a tachyon and numerically calculate stationary solutions of scalarized neutron stars in these theories. We speculate on the possibility of regularizing known divergent instabilities in certain gravity theories using our findings and discuss alternative theories of gravitation in which regularized instabilities may be present. Even though we study many specific examples, our main point is the recognition of regularized instabilities as a common theme and unifying mechanism in a vast array of gravity theories.

  19. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method, since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
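
The L-curve idea can be sketched independently of the SXRIS setup: sweep the regularization parameter, record (log residual norm, log solution norm) for each value, and take the corner of the resulting "L". A minimal pure-Python toy (Tikhonov via normal equations on a small ill-conditioned system of my own; the corner is picked by a simple maximum-distance-from-chord heuristic, whereas the paper uses GSVD-based machinery):

```python
import math

def tikhonov(A, b, alpha):
    # Solve (A^T A + alpha^2 I) x = A^T b by Gaussian elimination.
    m, n = len(A), len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (alpha ** 2 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def l_curve(A, b, alphas):
    pts = []
    for a in alphas:
        x = tikhonov(A, b, a)
        res = math.sqrt(sum((sum(Ai[j] * x[j] for j in range(len(x))) - bi) ** 2
                            for Ai, bi in zip(A, b)))
        nrm = math.sqrt(sum(v * v for v in x))
        pts.append((math.log10(res + 1e-300), math.log10(nrm + 1e-300)))
    # Corner heuristic: the point farthest from the chord joining the endpoints.
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    dx, dy = x1 - x0, y1 - y0
    chord = math.hypot(dx, dy)
    dist = [abs(dy * (px - x0) - dx * (py - y0)) / chord for px, py in pts]
    return alphas[dist.index(max(dist))], pts

# Ill-conditioned 5x3 Hilbert-like system with slightly perturbed data:
A = [[1.0 / (i + j + 1) for j in range(3)] for i in range(5)]
b = [sum(row) + 1e-4 * (-1) ** i for i, row in enumerate(A)]
alphas = [10.0 ** (-k) for k in range(1, 9)]
alpha_corner, pts = l_curve(A, b, alphas)
```

Moving along the curve, large α trades fit for a small solution norm and small α does the opposite; the corner balances the two, which is the property the paper exploits when noise is present.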

  20. Detailed finite element method modeling of evaporating multi-component droplets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diddens, Christian, E-mail: C.Diddens@tue.nl

    The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties, and thermal effects. Based on representative examples of water-glycerol and water-ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary-mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.

  1. Regularity of random attractors for fractional stochastic reaction-diffusion equations on R^n

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han

    2018-06-01

    We investigate the regularity of random attractors for the non-autonomous, non-local fractional stochastic reaction-diffusion equations in H^s(R^n) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in H^s(R^n) and attracts all tempered random subsets of L²(R^n) with respect to the norm of H^s(R^n). The main difficulty is to show the pullback asymptotic compactness of solutions in H^s(R^n) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.

  2. Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Budzien, S. A.; Hei, M. A.

    2017-03-01

    We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach based on regularization to a partial differential equation.
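
The nonnegativity that makes ISRAs attractive comes from the multiplicative form of the update. A minimal pure-Python sketch of the Richardson-Lucy iteration x ← x · Aᵀ(b / (Ax)) / Aᵀ1 on a tiny toy system of my own (illustrative only, not the ionospheric tomography code):

```python
def richardson_lucy(A, b, iters=500):
    # Multiplicative update: x <- x * (A^T (b / (A x))) / (A^T 1).
    # Iterates stay nonnegative for nonnegative A, b and a positive start.
    m, n = len(A), len(A[0])
    x = [1.0] * n                      # positive initial guess
    colsum = [sum(A[i][j] for i in range(m)) for j in range(n)]   # A^T 1
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [b[i] / Ax[i] for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / colsum[j] for j in range(n)]
    return x

# Tiny consistent system with nonnegative solution x = [1, 2]:
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = richardson_lucy(A, b)
```

Because every factor in the update is nonnegative, no explicit projection is needed, which is the property the abstract contrasts with least-squares solvers.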

  3. Space-Time Discrete KPZ Equation

    NASA Astrophysics Data System (ADS)

    Cannizzaro, G.; Matetski, K.

    2018-03-01

    We study a general family of space-time discretizations of the KPZ equation and show that they converge to its solution. The approach we follow makes use of basic elements of the theory of regularity structures (Hairer in Invent Math 198(2):269-504, 2014) as well as its discrete counterpart (Hairer and Matetski in Discretizations of rough stochastic PDEs, 2015. arXiv:1511.06937). Since the discretization is in both space and time and we allow non-standard discretization for the product, the methods mentioned above have to be suitably modified in order to accommodate the structure of the models under study.

  4. Derivation of Inviscid Quasi-geostrophic Equation from Rotational Compressible Magnetohydrodynamic Flows

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Sam; Lin, Ying-Chieh; Su, Cheng-Fang

    2018-04-01

    In this paper, we consider compressible models of magnetohydrodynamic flows, which give rise to a variety of mathematical problems in many areas. We derive a rigorous quasi-geostrophic equation governed by the magnetic field from the rotational compressible magnetohydrodynamic flows with well-prepared initial data. This is the first derivation of a quasi-geostrophic equation governed by the magnetic field, and the tool is the relative entropy method. This paper covers two results: the existence of a unique local strong solution of the quasi-geostrophic equation with good regularity, and the derivation of the quasi-geostrophic equation itself.

  5. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  6. Renormalization Group Theory of Bolgiano Scaling in Boussinesq Turbulence

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert

    1994-01-01

    Bolgiano scaling in Boussinesq turbulence is analyzed using the Yakhot-Orszag renormalization group. For this purpose, an isotropic model is introduced. Scaling exponents are calculated by forcing the temperature equation so that the temperature variance flux is constant in the inertial range. Universal amplitudes associated with the scaling laws are computed by expanding about a logarithmic theory. Connections between this formalism and the direct interaction approximation are discussed. It is suggested that the Yakhot-Orszag theory yields a lowest order approximate solution of a regularized direct interaction approximation which can be corrected by a simple iterative procedure.

  7. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models is a new approach to the problem of calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat-source solution methods for the fatigue process. It discusses prospects for applying inverse heat-source methods in the fatigue damage field, laying a foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.

  8. An improved cylindrical FDTD method and its application to field-tissue interaction study in MRI.

    PubMed

    Chi, Jieru; Liu, Feng; Xia, Ling; Shao, Tingting; Mason, David G; Crozier, Stuart

    2010-01-01

    This paper presents a three dimensional finite-difference time-domain (FDTD) scheme in cylindrical coordinates with an improved algorithm for accommodating the numerical singularity associated with the polar axis. The regularization of this singularity problem is entirely based on Ampere's law. The proposed algorithm has been detailed and verified against a problem with a known solution obtained from a commercial electromagnetic simulation package. The numerical scheme is also illustrated by modeling high-frequency RF field-human body interactions in MRI. The results demonstrate the accuracy and capability of the proposed algorithm.

  9. Analysis of the incomplete Galerkin method for modelling of smoothly-irregular transition between planar waveguides

    NASA Astrophysics Data System (ADS)

    Divakov, D.; Sevastianov, L.; Nikolaev, N.

    2017-01-01

    The paper deals with the numerical solution of the problem of waveguide propagation of polarized light in a smoothly-irregular transition between closed regular waveguides using the incomplete Galerkin method. This method consists in reducing the Helmholtz equation to a system of ordinary differential equations by the Kantorovich change of variables and in formulating the boundary conditions for the resulting system. The boundary problem for the ODE system is set up in the computer algebra system Maple, and the stated boundary problem is solved using Maple's libraries of numerical methods.

  10. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore, the success of supervised DAL in this "small sample" regime needs effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method, termed L1-LLR for short, is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with the new L1-LLR graph Laplacian regularization, and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets of faces, video, and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Boundedness and almost Periodicity in Time of Solutions of Evolutionary Variational Inequalities

    NASA Astrophysics Data System (ADS)

    Pankov, A. A.

    1983-04-01

    In this paper existence theorems are obtained for the solutions of abstract parabolic variational inequalities, which are bounded with respect to time (in the Stepanov and L^\\infty norms). The regularity and almost periodicity properties of such solutions are studied. Theorems are also established concerning their solvability in spaces of Besicovitch almost periodic functions. The majority of the results are obtained without any compactness assumptions. Bibliography: 30 titles.

  12. An Assessment of the Dimensionality and Factorial Structure of the Revised Paranormal Belief Scale

    PubMed Central

    Drinkwater, Kenneth; Denovan, Andrew; Dagnall, Neil; Parker, Andrew

    2017-01-01

    Since its introduction, the Revised Paranormal Belief Scale (RPBS) has developed into a principal measure of belief in the paranormal. Accordingly, the RPBS regularly appears within parapsychological research. Despite common usage, academic debates continue to focus on the factorial structure of the RPBS and its psychometric integrity. Using an aggregated heterogeneous sample (N = 3,764), the present study tested the fit of 10 factorial models encompassing variants of the most commonly proposed solutions (seven, five, two, and one-factor) plus new bifactor alternatives. A comparison of competing models revealed a seven-factor bifactor solution possessed superior data-model fit (CFI = 0.945, TLI = 0.933, IFI = 0.945, SRMR = 0.046, RMSEA = 0.058), containing strong factor loadings for a general factor and weaker, albeit acceptable, factor loadings for seven subfactors. This indicated that belief in the paranormal, as measured by the RPBS, is best characterized as a single overarching construct, comprising several related, but conceptually independent subfactors. Furthermore, women reported significantly higher paranormal belief scores than men, and tests of invariance indicated that mean differences in gender are unlikely to reflect measurement bias. Results indicate that despite concerns about the content and psychometric integrity of the RPBS the measure functions well at both a global and seven-factor level. Indeed, the original seven-factors contaminate alternative solutions. PMID:29018398

  13. An Assessment of the Dimensionality and Factorial Structure of the Revised Paranormal Belief Scale.

    PubMed

    Drinkwater, Kenneth; Denovan, Andrew; Dagnall, Neil; Parker, Andrew

    2017-01-01

    Since its introduction, the Revised Paranormal Belief Scale (RPBS) has developed into a principal measure of belief in the paranormal. Accordingly, the RPBS regularly appears within parapsychological research. Despite common usage, academic debates continue to focus on the factorial structure of the RPBS and its psychometric integrity. Using an aggregated heterogeneous sample ( N = 3,764), the present study tested the fit of 10 factorial models encompassing variants of the most commonly proposed solutions (seven, five, two, and one-factor) plus new bifactor alternatives. A comparison of competing models revealed a seven-factor bifactor solution possessed superior data-model fit (CFI = 0.945, TLI = 0.933, IFI = 0.945, SRMR = 0.046, RMSEA = 0.058), containing strong factor loadings for a general factor and weaker, albeit acceptable, factor loadings for seven subfactors. This indicated that belief in the paranormal, as measured by the RPBS, is best characterized as a single overarching construct, comprising several related, but conceptually independent subfactors. Furthermore, women reported significantly higher paranormal belief scores than men, and tests of invariance indicated that mean differences in gender are unlikely to reflect measurement bias. Results indicate that despite concerns about the content and psychometric integrity of the RPBS the measure functions well at both a global and seven-factor level. Indeed, the original seven-factors contaminate alternative solutions.

  14. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix, obtained from an atmospheric transport model, multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of the resulting optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
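    The manual tuning this abstract refers to can be illustrated with a plain zero-order Tikhonov (ridge) solution of a linear inverse problem. The following is a minimal sketch, not the LS-APC method itself: the ill-conditioned "SRS" matrix, noise level, and `lam` values are all hypothetical, and `lam` plays the role of the hand-set tuning parameter that the Bayesian approach estimates from the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-conditioned sensitivity matrix M and synthetic
# measurements y = M @ x_true + noise (illustrative values only).
n_obs, n_src = 40, 20
M = rng.standard_normal((n_obs, n_src)) @ np.diag(np.logspace(0, -6, n_src))
x_true = np.maximum(rng.standard_normal(n_src), 0.0)  # non-negative source term
y = M @ x_true + 1e-4 * rng.standard_normal(n_obs)

def tikhonov(M, y, lam):
    """Ridge (zero-order Tikhonov) solution of min ||y - Mx||^2 + lam*||x||^2."""
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ y)

# The regularization strength lam is the manually chosen tuning parameter;
# too small amplifies noise, too large over-smooths the source term.
for lam in (1e-8, 1e-4, 1e0):
    x_hat = tikhonov(M, y, lam)
    print(lam, np.linalg.norm(x_hat - x_true))
```

    Each choice of `lam` yields a different reconstruction error, which is precisely the sensitivity to tuning that a full Bayesian treatment removes.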

  15. Inflation in a closed universe

    NASA Astrophysics Data System (ADS)

    Ratra, Bharat

    2017-11-01

    To derive a power spectrum for energy density inhomogeneities in a closed universe, we study a spatially-closed inflation-modified hot big bang model whose evolutionary history is divided into three epochs: an early slowly-rolling scalar field inflation epoch and the usual radiation and nonrelativistic matter epochs. (For our purposes it is not necessary to consider a final dark energy dominated epoch.) We derive general solutions of the relativistic linear perturbation equations in each epoch. The constants of integration in the inflation epoch solutions are determined from de Sitter invariant quantum-mechanical initial conditions in the Lorentzian section of the inflating closed de Sitter space derived from Hawking's prescription that the quantum state of the universe only include field configurations that are regular on the Euclidean (de Sitter) sphere section. The constants of integration in the radiation and matter epoch solutions are determined from joining conditions derived by requiring that the linear perturbation equations remain nonsingular at the transitions between epochs. The matter epoch power spectrum of gauge-invariant energy density inhomogeneities is not a power law, and depends on spatial wave number in the way expected for a generalization to the closed model of the standard flat-space scale-invariant power spectrum. The power spectrum we derive appears to differ from a number of other closed inflation model power spectra derived assuming different (presumably non de Sitter invariant) initial conditions.

  16. Numerical Modeling of Sub-Wavelength Anti-Reflective Structures for Solar Module Applications

    PubMed Central

    Han, Katherine; Chang, Chih-Hung

    2014-01-01

    This paper reviews the current progress in mathematical modeling of anti-reflective subwavelength structures. Methods covered include effective medium theory (EMT), finite-difference time-domain (FDTD), transfer matrix method (TMM), the Fourier modal method (FMM)/rigorous coupled-wave analysis (RCWA) and the finite element method (FEM). Time-based solutions to Maxwell’s equations, such as FDTD, have the benefits of calculating reflectance for multiple wavelengths of light per simulation, but are computationally intensive. Space-discretized methods such as FDTD and FEM output field strength results over the whole geometry and are capable of modeling arbitrary shapes. Frequency-based solutions such as RCWA/FMM and FEM model one wavelength per simulation and are thus able to handle dispersion for regular geometries. Analytical approaches such as TMM are appropriate for very simple thin films. Initial disadvantages such as neglect of dispersion (FDTD), inaccuracy in TM polarization (RCWA), inability to model aperiodic gratings (RCWA), and inaccuracy with metallic materials (FDTD) have been overcome by most modern software. All rigorous numerical methods have accurately predicted the broadband reflection of ideal, graded-index anti-reflective subwavelength structures; ideal structures are tapered nanostructures with periods smaller than the wavelengths of light of interest and lengths that are at least a large portion of the wavelengths considered. PMID:28348287

  17. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.

  18. Dielectric Interactions and the Prediction of Retention Times of Pesticides in Supercritical Fluid Chromatography with CO2

    NASA Astrophysics Data System (ADS)

    Alvarez, Guillermo A.; Baumann, Wolfram

    2005-02-01

    A thermodynamic model for the partition of a solute (pesticide) between two immiscible phases, such as the stationary and mobile phases of supercritical fluid chromatography with CO2, is developed from first principles. A key ingredient of the model is Liptay's calculation of the energy of interaction of a polar molecule with a dielectric continuum, which represents the solvent. The strength of the interaction between the solute and the solvent, which may be considered a measure of the solvent power, is characterized by the function g = (ε - 1)/(2ε + 1), where ε is the dielectric constant of the medium, itself a function of the temperature T and the pressure P. Since the interactions between the nonpolar supercritical CO2 solvent and the slightly polar pesticide molecules are considered to be extremely weak, a regular solution model is appropriate from the thermodynamic point of view. At constant temperature, the model predicts a linear dependence of the logarithm of the capacity factor, ln k, of the chromatographic experiment on the function g = g(P) as the pressure is varied, with a slope that depends on the dipole moment of the solute, dispersion interactions, and the size of the solute cavity in the solvent. At constant pressure, once the term containing the g (solvent interaction) factor is subtracted from ln k, a plot of the resulting term against the inverse of temperature yields the enthalpy change of transfer of the solute from the mobile (supercritical CO2) phase to the stationary (adsorbent) phase. The increase in temperature, with the consequent large volume expansion of the supercritical fluid, lowers its solvent strength, and hence the capacity factor of the column (or solute retention time) increases. These pressure and temperature effects predicted by the model are in excellent agreement with the experimental retention times of seven pesticides. Beyond a temperature of about 393 K, where the liquid solvent densities approach those of a gas (and hence the solvent strength becomes negligible), a dramatic loss of the retention times of all pesticides is observed in the experiments; this is attributed to desorption of the solute from the stationary phase, as predicted by Le Châtelier's principle for the (exothermic) adsorption process.
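    The constant-temperature prediction ln k = a + b·g(P) can be sketched numerically. In the following minimal illustration, the dielectric constants, slope, and intercept are hypothetical values chosen for demonstration, not data from the paper:

```python
import numpy as np

def g(eps):
    """Solvent polarity function g = (eps - 1) / (2*eps + 1)."""
    return (eps - 1.0) / (2.0 * eps + 1.0)

# Hypothetical dielectric constants of supercritical CO2 at fixed
# temperature and increasing pressure (illustrative values only).
eps_vals = np.array([1.10, 1.25, 1.40, 1.55])

# The model predicts ln k = a + b * g(P) at constant T; generate
# synthetic capacity factors and fit a line to recover the slope b,
# which encodes the dipole moment, dispersion, and cavity-size terms.
a_true, b_true = 2.0, -15.0
ln_k = a_true + b_true * g(eps_vals)
b_fit, a_fit = np.polyfit(g(eps_vals), ln_k, 1)
print(a_fit, b_fit)
```

    Note that g(1) = 0 (vacuum, no solvent interaction) and g approaches 1/2 for large ε, so the fitted slope captures the full sensitivity of retention to solvent power.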

  19. Analytic solutions for Long's equation and its generalization

    NASA Astrophysics Data System (ADS)

    Humi, Mayer

    2017-12-01

    Two-dimensional, steady-state, stratified, isothermal atmospheric flow over topography is governed by Long's equation. Numerical solutions of this equation were derived and used by several authors. In particular, these solutions were applied extensively to analyze the experimental observations of gravity waves. In the first part of this paper we derive an extension of this equation to non-isothermal flows. Then we devise a transformation that simplifies this equation. We show that this simplified equation admits solitonic-type solutions in addition to regular gravity waves. These new analytical solutions provide new insights into the propagation and amplitude of gravity waves over topography.

  20. A framework with Cuckoo algorithm for discovering regular plans in mobile clients

    NASA Astrophysics Data System (ADS)

    Tsiligaridis, John

    2017-09-01

    In a mobile computing system, broadcasting has become a very interesting and challenging research issue. The server continuously broadcasts data to mobile users; the data can be inserted into customized-size relations and broadcast as a Regular Broadcast Plan (RBP) over multiple channels. Given the data size for each provided service, two algorithms, the Basic Regular (BRA) and the Partition Value Algorithm (PVA), can provide static and dynamic RBP constructions with multiple-constraint solutions, respectively. Servers have to define the data size of the services and can provide a feasible RBP working with many broadcasting plan operations. The operations become more complicated when there are many kinds of services and the sizes of the data sets are unknown to the server. To that end, a framework has been developed that also gives the ability to select low- or high-capacity channels for servicing. Theorems with new analytical results provide direct conditions stating the existence of solutions for the RBP problem with the compound criterion. Two kinds of solutions are provided: the equal and the non-equal subrelation solutions. The Cuckoo Search Algorithm (CS), with its Lévy flight behavior, has been selected for the optimization. The CS for RBP (CSRP) is developed by applying the theorems to the discovery of RBPs. An additional change to CS has been made in order to strengthen the local search. The CS can also discover RBPs with the minimum number of channels. With these capabilities, modern servers can be upgraded to discover RBPs with fewer channels.
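    The cuckoo search with Lévy flights mentioned above can be sketched for a generic minimization task. This is a textbook-style sketch (Mantegna-style Lévy steps, greedy replacement, abandonment of the worst nests), not the paper's CSRP variant, and all parameter values (`n_nests`, `pa`, the 0.01 step scale) are illustrative defaults:

```python
import numpy as np
from math import gamma, sin, pi

def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=200, seed=0):
    """Minimal cuckoo search with Levy flights for minimizing f."""
    rng = np.random.default_rng(seed)
    beta = 1.5  # Levy exponent; sigma from Mantegna's algorithm
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    nests = rng.uniform(-5.0, 5.0, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    best = nests[fit.argmin()].copy()
    for _ in range(iters):
        # Levy-flight step around the current best nest, greedy acceptance
        u = rng.standard_normal((n_nests, dim)) * sigma
        v = rng.standard_normal((n_nests, dim))
        step = u / np.abs(v) ** (1 / beta)
        cand = nests + 0.01 * step * (nests - best)
        cand_fit = np.array([f(x) for x in cand])
        better = cand_fit < fit
        nests[better], fit[better] = cand[better], cand_fit[better]
        # Abandon a fraction pa of the worst nests via random mixing
        n_drop = int(pa * n_nests)
        worst = fit.argsort()[-n_drop:]
        mix = rng.random((n_drop, dim)) * (
            nests[rng.permutation(n_nests)[:n_drop]]
            - nests[rng.permutation(n_nests)[:n_drop]])
        nests[worst] = nests[worst] + mix
        fit[worst] = np.array([f(x) for x in nests[worst]])
        best = nests[fit.argmin()].copy()
    return best, float(fit.min())
```

    Because candidates are accepted only when they improve a nest and abandonment never touches the current best, the best fitness is monotone non-increasing over iterations.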

  1. Dynamics for a diffusive prey-predator model with different free boundaries

    NASA Astrophysics Data System (ADS)

    Wang, Mingxin; Zhang, Yang

    2018-03-01

    To understand the spreading and interaction of prey and predator, in this paper we study the dynamics of a diffusive Lotka-Volterra type prey-predator model with different free boundaries. These two free boundaries, which may intersect each other as time evolves, describe the spreading of prey and predator. We investigate the existence and uniqueness, regularity and uniform estimates, and long-time behavior of the global solution. Some sufficient conditions for spreading and vanishing are established. When spreading occurs, we provide more accurate limits of (u, v) as t → ∞, and give estimates of the asymptotic spreading speeds of u, v and the asymptotic speeds of g, h. Some realistic and significant spreading phenomena are found.

  2. Private sector approaches to workforce enhancement.

    PubMed

    Wendling, Wayne R

    2010-06-01

    This paper addresses the private practice model of dental care delivery in the US. The great majority of dental care services are delivered through this model, and thus changes in the model represent a means to substantially change the supply and availability of dental services. The two main forces that change how private practices function are broad economic factors, which alter the demand for dental care, and innovations in practice structure and function, which alter the supply and cost of services. Economics has long recognized that although there are private market solutions for many issues, not all problems can be addressed through this model. The private practice of dentistry is a private market solution that works for a substantial share of the market. However, the private market may not work to resolve all issues associated with access and utilization. Solutions for some problems call for creative private-public arrangements, another form of innovation; and market-based solutions may not be feasible for each and every problem. This paper discusses these economic factors and innovation as they relate to the private practice of dentistry, with special emphasis on those elements that have increased the capacity of the dental practice to offer services to those with limited means to access fee-based care. Innovations are frequently described as new care delivery models or new workforce models. However, innovation can occur on an ongoing and regular basis as dental practices examine new ways to combine capital and human resources and to leverage the education and skill of dentists to a greater number of patients. Innovation occurs within a market context as the current and projected economic returns reward the innovation. Innovation can also occur through private-public arrangements. There are indications of available capacity within the existing delivery system to expand service delivery. The Michigan Medicaid Healthy Kids Dental program is discussed as one example of how dental services to Medicaid-insured children were effectively expanded using the private practice model.

  3. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
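    For a concrete instance of the exact penalty and its piecewise-linear path, consider projecting a point onto the nonnegative orthant. The sketch below uses a per-coordinate closed form rather than the authors' general ODE path-following machinery; it is an illustration of the exact-penalty idea, not their implementation:

```python
import numpy as np

def exact_penalty_path(a, rhos):
    """Trace x(rho) = argmin 0.5*||x - a||^2 + rho * sum(max(0, -x_i)),
    the exact (absolute-value) penalty for projecting a onto x >= 0.

    Per coordinate: x_i = a_i if a_i >= 0, else min(a_i + rho, 0).
    The path is piecewise linear, and the constrained solution is
    reached at the finite penalty constant rho = max(0, -min(a))."""
    a = np.asarray(a, dtype=float)
    return np.array([np.where(a >= 0, a, np.minimum(a + r, 0.0)) for r in rhos])

# Path for a = (-2, 1): the negative coordinate slides linearly toward
# the boundary and stays there once rho >= 2.
path = exact_penalty_path([-2.0, 1.0], rhos=[0.0, 1.0, 2.0, 3.0])
print(path)
```

    This mirrors the abstract's point: unlike squared penalties, the absolute-value penalty recovers the constrained solution (here, the projection [0, 1]) at a finite penalty constant, and the path hits and then slides along the active constraint.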

  4. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  5. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

    NASA Astrophysics Data System (ADS)

    Can, Van Hao

    2017-11-01

    In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs, including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of the random regular graphs.

  6. A Regular Production-Remanufacturing Inventory Model for a Two-Echelon System with Price-dependent Return Rate and Environmental Effects Investigation

    NASA Astrophysics Data System (ADS)

    Dwicahyani, A. R.; Jauhari, W. A.; Jonrinaldi

    2017-06-01

    Product take-back recovery has become a promising effort for companies seeking to create a sustainable supply chain. In addition, restrictions ranging from government regulations and social-ethical responsibilities to economic factors have contributed to the importance of product take-back recovery. This study develops an inventory model for a reverse logistics system consisting of a manufacturer and a collector. The collector gathers used products from the market and ships them to the manufacturer. The manufacturer then recovers the used products and sells them to the market; recovered products that cannot be restored to as-good-as-new condition are sold to the secondary market. We investigate the effects of environmental factors, including GHG emissions and energy usage from transportation, regular production, and remanufacturing operations conducted by the manufacturer, and solve the model to obtain the maximum annual joint total profit for both parties. The model also considers a price-dependent return rate and treats it as a decision variable, along with the number of shipments from the collector to the manufacturer and the optimal cycle period. An iterative procedure is proposed to determine the optimal solutions. We present a numerical example to illustrate the application of the model and perform a sensitivity analysis to study the effects of changes in environment-related costs on the model's decisions.

  7. Regular Motions of Resonant Asteroids

    NASA Astrophysics Data System (ADS)

    Ferraz-Mello, S.

    1990-11-01

    ABSTRACT: This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS

  8. Efficient Solution of Three-Dimensional Problems of Acoustic and Electromagnetic Scattering by Open Surfaces

    NASA Technical Reports Server (NTRS)

    Turc, Catalin; Anand, Akash; Bruno, Oscar; Chaubell, Julian

    2011-01-01

    We present a computational methodology (a novel Nyström approach based on the use of a non-overlapping patch technique and Chebyshev discretizations) for efficient solution of problems of acoustic and electromagnetic scattering by open surfaces. Our integral equation formulations (1) incorporate, as an ansatz, the singular nature of open-surface integral-equation solutions, and (2) for the Electric Field Integral Equation (EFIE), use analytical regularizers that effectively reduce the number of iterations required by Krylov-subspace iterative linear-algebra solvers.

  9. Towards combined global monthly gravity field solutions

    NASA Astrophysics Data System (ADS)

    Jaeggi, Adrian; Meyer, Ulrich; Beutler, Gerhard; Weigelt, Matthias; van Dam, Tonie; Mayer-Gürr, Torsten; Flury, Jakob; Flechtner, Frank; Dahle, Christoph; Lemoine, Jean-Michel; Bruinsma, Sean

    2014-05-01

    Currently, official GRACE Science Data System (SDS) monthly gravity field solutions are generated independently by the Centre for Space Research (CSR) and the German Research Centre for Geosciences (GFZ). Additional GRACE SDS monthly fields are provided by the Jet Propulsion Laboratory (JPL) for validation, and, outside the SDS, by a number of other institutions worldwide. Although the adopted background models and processing standards have been increasingly harmonized by the various processing centers in recent years, notable differences still exist, and users are largely left alone to decide which model to choose for their individual applications. This situation seriously limits the accessibility of these valuable data. Combinations are well established for other space-geodetic techniques, such as the Global Navigation Satellite Systems (GNSS), Satellite Laser Ranging (SLR), and Very Long Baseline Interferometry (VLBI); regularly comparing and combining space-geodetic products has tremendously increased their usefulness in a wide range of disciplines and scientific applications. Therefore, we propose in a first step to mutually compare the large variety of available monthly GRACE gravity field solutions, e.g., by assessing the signal content over selected regions, by estimating the noise over the oceans, and by performing significance tests. We attempt to attribute different solution characteristics to different processing strategies in order to identify subsets of solutions that are based on similar processing strategies. Using these subsets, we will in a second step explore ways to generate combined solutions, e.g., based on a weighted average of the individual solutions using empirical weights derived from pair-wise comparisons. We will also assess the quality of such a combined solution and discuss the potential benefits for the GRACE and GRACE-FO user community, but also address the minimum processing requirements to be met by each analysis centre to enable a meaningful combination (performed either on the solution level or, preferably, on the normal-equation level).
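The weighted-average combination step proposed above can be illustrated with a toy calculation. The following is a minimal sketch, assuming empirical weights taken as inverse mean squared pair-wise differences between solutions; the function name, the weighting rule, and the numbers are illustrative assumptions, not details from the abstract.

```python
# Illustrative sketch: combine several monthly gravity-field solutions by a
# weighted average, with empirical weights derived from pair-wise comparisons.
# The weighting rule (inverse mean squared pair-wise difference) is an
# assumption for this sketch, not the centres' actual scheme.

def combine_solutions(solutions):
    """Weighted average of solutions (equal-length coefficient lists).

    Each solution's weight is the inverse of its mean squared deviation
    from the other solutions, a simple proxy for pair-wise comparison noise.
    """
    n = len(solutions)
    m = len(solutions[0])
    weights = []
    for i, s in enumerate(solutions):
        # mean squared difference against every other solution
        msd = sum((s[k] - t[k]) ** 2
                  for j, t in enumerate(solutions) if j != i
                  for k in range(m)) / ((n - 1) * m)
        weights.append(1.0 / msd if msd > 0 else 1.0)
    total = sum(weights)
    weights = [w / total for w in weights]
    combined = [sum(w * s[k] for w, s in zip(weights, solutions))
                for k in range(m)]
    return combined, weights

# Toy example: three "centres", the third noticeably noisier than the others,
# so it should receive the smallest weight.
sols = [[1.00, 2.00, 3.00],
        [1.02, 1.98, 3.01],
        [1.30, 1.60, 3.40]]
combined, w = combine_solutions(sols)
```

A combination on the normal-equation level, as the abstract prefers, would instead stack and weight the normal equations before a single joint solve; the solution-level average above is only the simpler of the two options mentioned.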

  10. Development of an automated experimental setup for the study of ionic-exchange kinetics. Application to the ionic adsorption, equilibrium attainment and dissolution of apatite compounds.

    PubMed

    Thomann, J M; Gasser, P; Bres, E F; Voegel, J C; Gramain, P

    1990-02-01

    An ion-selective electrode and microcomputer-based experimental setup for the study of ionic-exchange kinetics between a powdered solid and a solution is described. The equipment is composed of readily available commercial devices, and a data acquisition and regularization computer program is presented. The system, developed especially to investigate the ionic adsorption, equilibrium attainment, and dissolution of hard mineralized tissues, provides reliable results by taking into account the volume changes of the reacting solution and the electrode behaviour under different experimental conditions, and by avoiding carbonation of the solution. A second computer program, using the regularized data and the experimental parameters, calculates the quantities of protons consumed and calcium released in the case of equilibrium attainment and dissolution of apatite-like compounds. Finally, typical examples of ion-exchange and dissolution kinetics at constant pH for enamel and synthetic hydroxyapatite are examined.

  11. High-resolution imaging-guided electroencephalography source localization: temporal effect regularization incorporation in LORETA inverse solution

    NASA Astrophysics Data System (ADS)

    Boughariou, Jihene; Zouch, Wassim; Slima, Mohamed Ben; Kammoun, Ines; Hamida, Ahmed Ben

    2015-11-01

    Electroencephalography (EEG) and magnetic resonance imaging (MRI) are noninvasive, widely used, and complementary neuroimaging modalities. Fusing them can enhance emerging research aimed at better exploration of brain activity, and has attracted many investigators seeking to provide a convenient, advanced clinical-aid tool for neurological exploration. Our research addresses the EEG inverse problem and investigates an advanced estimation methodology for localizing cerebral activity. We focus on integrating temporal priors into the low-resolution brain electromagnetic tomography (LORETA) formalism for solving the EEG inverse problem. The main idea of the proposed method is to integrate a temporal projection matrix within the LORETA weighting matrix. A hyperparameter governs this temporal integration, and its importance becomes obvious in obtaining a regularized, smooth solution. Our experimental results clearly confirm the impact of the optimization procedure adopted for the temporal regularization parameter, compared with the standard LORETA method.

  12. FPGA-accelerated algorithm for the regular expression matching system

    NASA Astrophysics Data System (ADS)

    Russek, P.; Wiatr, K.

    2015-01-01

    This article describes an algorithm to support a regular-expression matching system. The goal was to achieve a high-performance system with low energy consumption. The basic idea of the algorithm comes from the concept of the Bloom filter. It starts from the extraction of static sub-strings from the regular expressions. The algorithm is devised to benefit from its decomposition into parts intended to be executed by custom hardware and by the central processing unit (CPU). A pipelined custom processor architecture is proposed, and the software algorithm is explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in a field-programmable gate array (FPGA). The performance results and required resources of these experiments are given. An example target application for the presented solution is computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
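The core idea described above can be sketched in software: hash fixed-length windows of the static sub-strings into a Bloom filter, and run the expensive full regex match only on inputs that hit the filter. The hash scheme, window length, filter size, and pattern set below are illustrative assumptions for this sketch, not the paper's hardware design.

```python
# Illustrative sketch: a Bloom-filter prefilter over static sub-strings of
# regular expressions. The filter may report false positives (resolved by
# the full regex match) but never false negatives for indexed windows.
import hashlib
import re

class BloomPrefilter:
    def __init__(self, size_bits=65536, n_hashes=3, window=4):
        self.size = size_bits
        self.n_hashes = n_hashes
        self.window = window
        self.bits = bytearray(size_bits // 8)

    def _positions(self, chunk):
        # derive n_hashes bit positions from a single SHA-256 digest
        digest = hashlib.sha256(chunk).digest()
        for i in range(self.n_hashes):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.size

    def add_signature(self, static_substring):
        # index every fixed-length window of the static sub-string
        s = static_substring.encode()
        for i in range(len(s) - self.window + 1):
            for pos in self._positions(s[i:i + self.window]):
                self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, data):
        # True if some window of the data hits all hash positions
        d = data.encode()
        for i in range(len(d) - self.window + 1):
            if all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(d[i:i + self.window])):
                return True
        return False

# Usage: extract static sub-strings from the patterns, then prefilter inputs.
patterns = [r"EVIL\d+PAYLOAD", r"malware-sig"]       # hypothetical signatures
statics = ["EVIL", "PAYLOAD", "malware-sig"]         # their static parts
bloom = BloomPrefilter()
for s in statics:
    bloom.add_signature(s)

def scan(data):
    if not bloom.maybe_contains(data):
        return False                                  # fast path: clean
    return any(re.search(p, data) for p in patterns)  # slow full match
```

In the paper's architecture, the role played here by `maybe_contains` is the part offloaded to the FPGA pipeline, while the full match stays on the CPU.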

  13. Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources

    PubMed Central

    Rahimi, Azar; Xu, Jingjia; Wang, Linwei

    2013-01-01

    Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activity is much more limited. Progress in computationally reconstructing cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness of the reconstruction problem and its lack of a unique solution. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffuse or too scattered to reflect the complex spatial structure of the current source distribution in the heart. In this work, we propose a general regularization with an Lp-norm (1 < p < 2) constraint to bridge the gap and balance between an overly smeared and an overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources of increasing extent. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of the excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of the source distribution is important for revealing potential disruptions to normal heart excitation. PMID:24348735
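The bridging behaviour of an Lp penalty with 1 < p < 2 can be seen already in the separable scalar problem min_x (x - y)^2 + lam*|x|^p: relative to L2, it shrinks small coefficients harder and large coefficients less. The sketch below is a generic illustration of that property, not the paper's cardiac imaging solver; the bisection solver and the numbers are assumptions of this sketch.

```python
# Illustrative sketch: the proximal operator of lam*|x|^p for 1 < p <= 2,
# computed by bisection on the derivative (strictly increasing for p > 1).

def lp_prox(y, lam, p, tol=1e-10):
    """Minimize (x - y)^2 + lam*|x|^p over x (scalar)."""
    sign = 1.0 if y >= 0 else -1.0
    y = abs(y)
    lo, hi = 0.0, y          # the optimum lies between 0 and y
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # derivative of (x - y)^2 + lam*x^p on x >= 0
        g = 2.0 * (mid - y) + lam * p * mid ** (p - 1.0)
        if g < 0:
            lo = mid
        else:
            hi = mid
    return sign * 0.5 * (lo + hi)

# Shrinkage comparison on the same inputs: with p = 2 the shrinkage is
# proportional (x = y / (1 + lam)); with p = 1.5 a large coefficient is
# shrunk less and a small coefficient is shrunk more, which is what lets
# Lp (1 < p < 2) sit between the focal L1 and the diffuse L2 behaviour.
x_l2_big = lp_prox(2.0, 1.0, 2.0)    # = 2 / (1 + 1) = 1.0
x_lp_big = lp_prox(2.0, 1.0, 1.5)
x_l2_small = lp_prox(0.1, 1.0, 2.0)  # = 0.05
x_lp_small = lp_prox(0.1, 1.0, 1.5)
```

Applied coordinate-wise inside a proximal-gradient loop, this operator yields one simple (if not the fastest) way to solve Lp-regularized reconstruction problems.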

  14. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy.

    PubMed

    Verveer, P. J; Gemkow, M. J; Jovin, T. M

    1999-01-01

    We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified within a Bayesian framework according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise, in combination with Tikhonov regularization, entropy regularization, Good's roughness, and no regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival restored confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
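One of the compared families, Tikhonov-regularized restoration under a Gaussian noise model, can be sketched on a toy 1-D deblurring problem. This is a minimal sketch of the technique, assuming a known symmetric blur kernel and plain gradient descent; it is not the paper's microscopy pipeline, and the signal, kernel, and noise values are invented for illustration.

```python
# Illustrative sketch: Tikhonov-regularized restoration,
#   minimize ||H x - y||^2 + lam * ||x||^2,
# for a 1-D signal blurred by a symmetric kernel (Gaussian noise model).

def blur(x, kernel):
    """'Same'-size 1-D convolution with a symmetric kernel (zero padding)."""
    r = len(kernel) // 2
    n = len(x)
    return [sum(kernel[j + r] * x[i + j]
                for j in range(-r, r + 1) if 0 <= i + j < n)
            for i in range(n)]

def restore(y, kernel, lam, steps=5000, lr=0.2):
    """Gradient descent on ||H x - y||^2 + lam * ||x||^2."""
    x = list(y)                        # initialize with the blurred data
    for _ in range(steps):
        resid = [b - yi for b, yi in zip(blur(x, kernel), y)]
        grad = blur(resid, kernel)     # H^T = H for a symmetric kernel
        x = [xi - lr * (2 * gi + 2 * lam * xi)
             for xi, gi in zip(x, grad)]
    return x

truth = [0, 0, 1, 1, 1, 0, 0]
kernel = [0.25, 0.5, 0.25]
noise = [0.02, -0.01, 0.01, -0.02, 0.02, -0.01, 0.01]
noisy = [b + e for b, e in zip(blur(truth, kernel), noise)]
restored = restore(noisy, kernel, lam=0.01)
```

With lam = 0, the same loop is the maximum-likelihood estimate under Gaussian noise, which amplifies noise in the poorly conditioned high-frequency modes; the quadratic penalty is what keeps the inversion stable.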

  15. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem that is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of both the ℓ1-regularized method and the prior model information obtained from sonic logs and geological information, in this paper we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information. Numerical experiments show that this method not only improves the inversion results but is also robust to noise.
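The essential ingredient of ℓ1-regularized optimization, the shrinkage that produces exact zeros, can be shown with the much simpler proximal-gradient (ISTA) iteration rather than OWL-QN itself. The sketch below therefore stands in for, and is not, the paper's OWL-QN implementation; the toy linear system and parameter values are assumptions of this sketch.

```python
# Illustrative sketch: ISTA for  min_x ||A x - b||^2 + lam * ||x||_1.
# The soft-thresholding (shrinkage) step is what sets small coefficients
# exactly to zero, the behaviour that OWL-QN achieves with a quasi-Newton
# step restricted to one orthant.

def soft_threshold(v, t):
    """Proximal operator of t*|x|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, b, lam, steps=500, lr=0.1):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # gradient of the smooth part ||A x - b||^2
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(len(A))]
        g = [2 * sum(A[i][j] * r[i] for i in range(len(A)))
             for j in range(n)]
        x = [soft_threshold(x[j] - lr * g[j], lr * lam) for j in range(n)]
    return x

# Overdetermined toy system whose least-squares solution is (1, 0.05);
# the l1 penalty drives the small coefficient exactly to zero.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 0.05, 1.05]
x = ista(A, b, lam=0.4)
```

A prior-model term as in the abstract could be added by penalizing ||x - x_prior||_1 instead of ||x||_1, i.e., by shifting the shrinkage target from zero to the prior value.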

  16. Consistent Partial Least Squares Path Modeling via Regularization.

    PubMed

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc against its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
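The effect of the ridge-type regularization described above is the same one visible in ordinary ridge regression: with nearly collinear predictors, the unregularized solve is unstable, and adding lam*I to the cross-product matrix stabilizes it. The following is a minimal sketch of that mechanism on a two-predictor toy problem, with invented numbers; it is not the paper's regularized-PLSc estimator.

```python
# Illustrative sketch: ridge regression with two predictors, closed form.
# Solves (X'X + lam*I) beta = X'y; lam = 0 gives ordinary least squares.

def ridge_2x2(X, y, lam):
    a = sum(r[0] * r[0] for r in X) + lam      # (X'X)[0,0] + lam
    bb = sum(r[0] * r[1] for r in X)           # (X'X)[0,1]
    d = sum(r[1] * r[1] for r in X) + lam      # (X'X)[1,1] + lam
    c1 = sum(r[0] * yi for r, yi in zip(X, y)) # (X'y)[0]
    c2 = sum(r[1] * yi for r, yi in zip(X, y)) # (X'y)[1]
    det = a * d - bb * bb
    return [(d * c1 - bb * c2) / det, (a * c2 - bb * c1) / det]

# Nearly collinear predictors: the second column tracks the first.
X = [[1.0, 1.01], [2.0, 1.99], [3.0, 3.02], [4.0, 3.98]]
y = [2.0, 4.1, 5.9, 8.0]

beta_ols = ridge_2x2(X, y, 0.0)    # unstable: large opposite-signed betas
beta_ridge = ridge_2x2(X, y, 1.0)  # stabilized: both near 1
```

The same trade applies in regularized PLSc: a small bias from the penalty is exchanged for a large reduction in the variance of the path-coefficient estimates under multicollinearity.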

  17. High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities

    DTIC Science & Technology

    2015-03-31

    In the present work, we develop a high-order numerical method for solving linear elliptic PDEs with well-behaved variable coefficients. Because the FD scheme is only consistent for classical solutions of the PDE, and solutions may lack regularity due to the boundary conditions, we implement the method of singularity subtraction.

  18. Improved belief propagation algorithm finds many Bethe states in the random-field Ising model on random graphs

    NASA Astrophysics Data System (ADS)

    Perugini, G.; Ricci-Tersenghi, F.

    2018-01-01

    We first present an empirical study of the Belief Propagation (BP) algorithm, run on the random-field Ising model defined on random regular graphs in the zero-temperature limit. We introduce the notion of extremal solutions of the BP equations, and we use them to fix a fraction of spins in their ground-state configuration. At the phase transition point, the fraction of unconstrained spins percolates and their number diverges with the system size. This in turn makes the associated optimization problem highly nontrivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions, we design a new and very easy-to-implement BP scheme that is able to output a large number of stable fixed points. On the one hand, this new algorithm provides the minimum-energy configuration with high probability in competitive time. On the other hand, we find that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new and relevant questions about the physics of this class of models.

  19. Efficient Regular Perovskite Solar Cells Based on Pristine [70]Fullerene as Electron-Selective Contact.

    PubMed

    Collavini, Silvia; Kosta, Ivet; Völker, Sebastian F; Cabanero, German; Grande, Hans J; Tena-Zaera, Ramón; Delgado, Juan Luis

    2016-06-08

    [70]Fullerene is presented as an efficient alternative electron-selective contact (ESC) for regular-architecture perovskite solar cells (PSCs). A smart and simple, well-described solution processing protocol for the preparation of [70]- and [60]fullerene-based solar cells, namely the fullerene saturation approach (FSA), allowed us to obtain similar power conversion efficiencies for both fullerene materials (i.e., 10.4 and 11.4 % for [70]- and [60]fullerene-based devices, respectively). Importantly, despite the low electron mobility and significant visible-light absorption of [70]fullerene, the presented protocol allows the employment of [70]fullerene as an efficient ESC. The [70]fullerene film thickness and its solubility in the perovskite processing solutions are crucial parameters, which can be controlled by the use of this simple solution processing protocol. The damage to the [70]fullerene film through dissolution during the perovskite deposition is avoided through the saturation of the perovskite processing solution with [70]fullerene. Additionally, this fullerene-saturation strategy improves the performance of the perovskite film significantly and enhances the power conversion efficiency of solar cells based on different ESCs (i.e., [60]fullerene, [70]fullerene, and TiO2 ). Therefore, this universal solution processing protocol widens the opportunities for the further development of PSCs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. A model of two-way selection system for human behavior.

    PubMed

    Zhou, Bin; Qin, Shujia; Han, Xiao-Pu; He, Zhe; Xie, Jia-Rong; Wang, Bing-Hong

    2014-01-01

    Two-way selection is a common phenomenon in nature and society. It appears in processes such as choosing a mate between men and women, making contracts between job hunters and recruiters, and trading between buyers and sellers. In this paper, we propose a model of a two-way selection system and present its analytical solution for the expected total number of successful matches, together with the regular pattern that the matching rate tends to be inversely proportional either to the ratio between the two sides or to the ratio of the total number of states to the size of the smaller group. The proposed model is verified by empirical data from matchmaking fairs. The results indicate that the model predicts this typical real-world two-way selection behavior within bounded error, and is thus helpful for understanding the dynamical mechanism of real-world two-way selection systems.
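A two-way selection process of the kind described above can be explored with a small Monte Carlo simulation. The proposal/acceptance rule below (one side proposes uniformly at random, each member of the other side accepts one proposal) is an illustrative assumption of this sketch, not the paper's exact dynamics; it does reproduce the qualitative pattern that the larger side's matching rate drops as the size ratio grows.

```python
# Illustrative sketch: Monte Carlo estimate of the matching rate in a
# simple two-way selection round between m proposers and n receivers.
import random

def match_once(m, n, rng):
    """One round: m proposers each pick one of n receivers uniformly;
    a receiver with at least one proposal forms exactly one match."""
    hit = set(rng.randrange(n) for _ in range(m))
    return len(hit)

def matching_rate(m, n, trials=500, seed=0):
    """Average fraction of proposers matched, over many simulated rounds."""
    rng = random.Random(seed)
    total = sum(match_once(m, n, rng) for _ in range(trials))
    return total / (trials * m)

rate_balanced = matching_rate(50, 50)
rate_skewed = matching_rate(100, 50)   # proposers outnumber receivers 2:1
```

Under this rule the expected number of matches is n*(1 - (1 - 1/n)**m), so doubling the proposer side raises the match total only a little while roughly halving the proposers' matching rate, consistent with the inverse-proportionality pattern the abstract describes.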
